Is it OK to kill time? Machines used to find this question difficult to answer. But a new study shows that artificial intelligence (AI) can be programmed to judge right from wrong.
“We show that machines can learn about our moral values,” says Dr. Patrick Schramowski, author of the study, who is based at the Darmstadt University of Technology in Germany.
“There is general agreement that AI research is progressing rapidly and that AI’s influence on society is likely to increase,” Schramowski reports. “From self-driving cars to health care, AI systems deal with increasingly complex human tasks in increasingly autonomous ways. It is important to carry out research in this area so that we can trust the decisions they make.”
Schramowski’s AI system is named the Moral Choice Machine (MCM). He and his team trained it on sets of newspaper articles and other texts published between 1510 and 2009.
Once the scientists had trained the MCM, it adopted the moral values of the texts it was given. Asked whether one may steal money, harm animals, or kill a living being, it answers “No.” But ask “Should I kill time?” and it will tell you that is fine, because it has understood that the phrase implies no harm. In general, the machine gives a reasonable answer.
“The MCM did this not by simply repeating the text it found,” Schramowski reports. “It could respond to the contextual information provided in a question.”
Furthermore, the study shows that the machine takes up moral values that reflect the time and the kind of society its written sources come from, revealing how social norms have changed over the ages.
For example, when the researchers limited its training data to news articles from 2008 to 2009, the AI system favored work and school over family life. But when it could only explore news from the late 1980s and 1990s, it favored marriage and parenting.
Question 1: What does Schramowski mainly talk about in paragraph 3?
A. Their research methods.
B. AI systems’ bright future.
C. The great value of their study.
D. The difficulties of AI research.
Question 2: How did the researchers train the MCM?
A. They let it repeat moral stories.
B. They showed it many kind acts.
C. They offered it a lot of written material.
D. They talked with it about decision-making.
Question 3: What is the MCM able to do?
A. Improve social rules.
B. Tell right from wrong.
C. Help humans make decisions.
D. Create texts about moral values.
Question 4: What do the last two paragraphs suggest?
A. Social values change over time.
B. Technology should be used wisely.
C. AI systems have their own limitations.
D. It’s hard for machines to make moral choices.