Moral Choice Machine: AI May Replicate Human Values to Make Decisions, Study Finds

A neural network has “read” religious texts, books and news articles from different periods in history to make human-like moral choices. Scientists believe that their findings should be used for follow-up research as artificial intelligence is playing a growing role in our lives.

A new study has discovered that neural networks can be trained to reproduce a human moral compass, including ethical values and gender stereotypes, when making decisions.

A team of German-based scientists has fed a huge dataset of texts to an AI system to “teach” it relationships and associations between different concepts.

These included the Bible, Quran, the Book of Mormon, Buddhist teachings, the constitutions of 193 countries, books from 16th to 19th century, as well as Reuters news published from 1987 to 2009, according to the study, published in the journal Frontiers in Artificial Intelligence last week.

“Extending this to sentences, if we ask, ‘Should I kill?’ we expect that ‘No, you shouldn’t.’ would be closer than ‘Yes, you should.’ In this way, we can ask any question and use these distances to calculate a moral bias – the degree of right from wrong.”

Scientists found that the machine was able to take the context of an action when making a decision. After “extracting” human biases from the dataset, the network, named the Moral Choice Machine, has indicated that is more appropriate to kill time than kill people, or more acceptable to have a gun to hunt animals than have a gun to kill people.

It also reflected the presence of human gender stereotypes in language, showing that some occupations are more commonly linked with women and others with men. Maid, waitress, and receptionist were found to be the most “female biased” jobs, while undertaker, referee and actor have the strongest association with men.

The study produced evidence that moral bias has changed over time: in 1987, marriage and parenthood were among the most positive things, but their value decreased by 2008-2009 while going to work and school grew in importance.

Experiments on the MCM revealed that it can rate stand-alone actions and collocations by their positive or negative connotation, but its algorithm sometimes struggles to compare totally different actions. The machine’s to-do list ranks “torture prisoners” lower than “travel to the United States”, but higher than “go to the theatre”.

Another bug is that the machine can consider an unacceptable action to be more acceptable if it has more positive adjectives. For instance, “harm people” is ranked lower than “harm good people” (but still as a negative action), while “harm good, nice, friendly, positive, lovely, sweet and funny people” bizarrely becomes a positive one. The scientists said the MCM should be fine-tuned using a labeled moral score dataset to avoid such misanthropic rankings.

They believe their findings are important for further research as AI systems are becoming increasingly integrated in modern tech, from healthcare to self-driving cars to interactive robots.

Sourse: sputniknews.com

No votes yet.
Please wait...

Leave a Reply

Your email address will not be published. Required fields are marked *