Researchers at an artificial intelligence laboratory in Seattle called the Allen Institute for AI last month unveiled a new technology designed to make moral judgments. They named it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.

Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn’t. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others. This time Delphi said he should not.

Morality, it seems, is just as tricky for a machine as it is for a person.

Delphi, which has had more than three million visits in the past few weeks, is trying to solve what some consider to be a major problem in modern AI systems: they can be as flawed as the people who create them.

Facial recognition systems and digital assistants show bias against women and people of color. Social networks like Facebook and Twitter fail to control hate speech, despite the widespread use of artificial intelligence. Algorithms used by courts, parole offices, and police departments make parole and sentencing recommendations that can seem arbitrary.

A growing number of computer scientists and ethicists are working to address these issues. And the creators of Delphi hope to create an ethical framework that could be installed in any online service, robot, or vehicle.

“It’s a first step toward making AI systems more ethical, socially conscious and culturally inclusive,” said Yejin Choi, a researcher at the Allen Institute and a professor of computer science at the University of Washington, who led the project.

Delphi is by turns fascinating, frustrating, and disturbing. It is also a reminder that the morality of any technological creation is a product of those who built it. The question is: who gets to teach ethics to the world’s machines? AI researchers? Product managers? Mark Zuckerberg? Trained philosophers and psychologists? Government regulators?

While some technologists applauded Dr. Choi and her team for exploring an important and sensitive area of technological research, others argued that the very idea of a moral machine is nonsense.

“That’s something the technology doesn’t do very well,” said Ryan Cotterell, an AI researcher at ETH Zurich, a university in Switzerland, who came across Delphi online in its first days.

Artificial intelligence researchers call Delphi a neural network, a mathematical system loosely modeled on the network of neurons in the brain. It is the same technology that recognizes the commands you speak into your smartphone and identifies pedestrians and street signs as self-driving cars speed down the highway.

A neural network learns skills by analyzing large amounts of data. For example, by detecting patterns in thousands of cat photos, it can learn to recognize a cat. Delphi learned its moral compass by analyzing more than 1.7 million ethical judgments made by real people.

After the Allen Institute collected millions of everyday scenarios from websites and other sources, it asked workers on an online service – ordinary people paid to do digital work at companies like Amazon – to label each one as right or wrong. Then the researchers fed the data into Delphi.
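The general recipe is the same one behind most modern machine learning: turn labeled examples into numbers, then fit a network to the labels. The sketch below illustrates that recipe with a toy stand-in; the four scenarios, the bag-of-words encoding, and the tiny network are all illustrative assumptions, not Delphi’s actual code, which is built on a much larger pretrained language model.

```python
# A minimal sketch, assuming a toy dataset of crowd-labeled scenarios
# (1 = "it's okay", 0 = "it's wrong"). Not Delphi's real architecture.
import torch
import torch.nn as nn

scenarios = [
    ("helping a friend move", 1),
    ("ignoring a phone call from a friend", 1),
    ("telling a white lie to spare someone's feelings", 1),
    ("stealing money from a coworker", 0),
]

# Build a bag-of-words vocabulary from the training scenarios.
vocab = {w: i for i, w in enumerate(sorted({w for s, _ in scenarios for w in s.split()}))}

def encode(text: str) -> torch.Tensor:
    vec = torch.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            vec[vocab[w]] += 1.0
    return vec

X = torch.stack([encode(s) for s, _ in scenarios])
y = torch.tensor([label for _, label in scenarios], dtype=torch.float32)

# A small feed-forward network: the "moral compass" is just learned weights.
model = nn.Sequential(nn.Linear(len(vocab), 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):  # fit the labels the workers provided
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    optimizer.step()

# The model can only echo patterns in its training data.
prob = torch.sigmoid(model(encode("stealing money from a friend")).squeeze())
print(f"judged acceptable with probability {prob.item():.2f}")
```

A model like this has no notion of right or wrong beyond the statistical patterns in whatever examples its builders chose to label.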

In a paper describing the system, Dr. Choi and her team said that a group of human judges – again, digital workers – found Delphi’s ethical judgments to be 92 percent accurate. After it was released to the open internet, many others agreed that the system was surprisingly smart.

When Patricia Churchland, a philosopher at the University of California, San Diego, asked whether it was right “to leave one’s body to science” or even “to leave a child’s body to science,” Delphi said it was. When she asked whether it was right “to convict a man charged with rape on the evidence of a female prostitute,” Delphi said it was not – a contentious answer, to say the least. Still, she was somewhat impressed by its ability to respond, though she knew a human ethicist would ask for more information before making such pronouncements.

Others found the system woefully inconsistent, illogical, and offensive. When a software developer came across Delphi, she asked the system whether she should die so she would not burden her friends and family. It said she should. Ask Delphi that question now and you may get a different answer from an updated version of the program. Regular users have noticed that Delphi can change its mind from time to time. Technically, those changes happen because the Delphi software has been updated.

Artificial intelligence technologies seem to mimic human behavior in some situations but fail completely in others. Because modern systems learn from such large amounts of data, it is difficult to know when, how, or why they will make mistakes. Researchers can refine and improve these technologies. But that does not mean a system like Delphi can master ethical behavior.

Dr. Churchland said ethics are intertwined with emotion. “Attachments, especially attachments between parents and offspring, are the platform on which morality is built,” she said. But a machine lacks emotion. “Neural networks don’t feel anything,” she added.

Some may see this as a strength – that a machine can create ethical rules without prejudice – but systems like Delphi ultimately reflect the motivations, opinions and prejudices of the people and companies that build them.

“We can’t hold machines accountable for their actions,” said Zeerak Talat, an AI and ethics researcher at Simon Fraser University in British Columbia. “They are not unguided. There are always people directing them and using them.”

Delphi reflected the choices made by its creators. This included the ethical scenarios they wanted to feed into the system and the online workers they chose to assess those scenarios.

In the future, researchers could refine the system’s behavior by training it with new data or by hand-coding rules that override its learned behavior at key moments. But however they build and modify the system, it will always reflect their worldview.
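The hand-coded override idea can be illustrated with a short, hypothetical sketch: a thin wrapper that checks a scenario against a fixed rule list before deferring to whatever the trained model would say. The rule list and the function names below are invented for illustration and are not part of Delphi.

```python
# A minimal sketch, assuming a hypothetical rule list and judge function.
BLOCKED_PATTERNS = ["genocide", "kill one person"]

def judge_with_overrides(scenario: str, learned_judge) -> str:
    # Hard rule: the builders' chosen verdict wins for flagged scenarios.
    if any(p in scenario.lower() for p in BLOCKED_PATTERNS):
        return "it's wrong"
    # Otherwise fall back to whatever the trained model learned from its data.
    return learned_judge(scenario)

# Example use with any callable that maps text to a verdict:
verdict = judge_with_overrides("Is it OK to kill one person to save 101?",
                               learned_judge=lambda s: "it's okay")
print(verdict)  # the hand-coded rule wins: "it's wrong"
```

Either way, a person decides which patterns get blocked and which verdicts get forced, so the override layer encodes its authors’ judgments just as surely as the training data does.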

Some would argue that if the system were trained on enough data to represent the views of enough people, it would correctly represent societal norms. But social norms are often in the eye of the beholder.

“Morality is subjective. It’s not as if we can just write down all the rules and hand them over to a machine,” said Kristian Kersting, a professor of computer science at TU Darmstadt who has explored similar technology.

When the Allen Institute released Delphi in mid-October, it described the system as a computational model for moral judgments. When asked whether you should have an abortion, it replied definitively: “Delphi says: you should.”

But after many complained about the system’s obvious limitations, the researchers modified the website. They now call Delphi “a research prototype designed to model people’s moral judgments”. It no longer “says”. It “speculates”.

It also comes with a disclaimer: “Model outputs should not be used to advise people and could be potentially offensive, problematic, or harmful.”


