Tech

Teaching artificial intelligence right from wrong: New tool from AI2 aims to model moral judgments


Over the past two decades, machine ethics has gone from a curiosity to a very important field. Much of the work is based on the idea that as artificial intelligence becomes more and more capable, its actions must conform to expected human standards and ethics.

To explore this, the Seattle-based Allen Institute for Artificial Intelligence (AI2) The recently developed Delphi, a machine-ethical AI designed to model human ethical judgments about a variety of everyday situations. Research could one day help ensure other AIs can align with human values ​​and ethics.

Built upon a collection of 1.7 million descriptive ethics examples that have been created and then tested by human-trained community workers, Delphi’s neural network co-creates observance of human ethical standards 92.1% of the time in the laboratory. However, in the wild, performance drops to a little more than 80%. While far from perfect, it’s still a remarkable achievement. With further filtering and enhancement, Delphi will continue to improve.

AI2 Research Demo Prototype,”Ask Delphi“Published on October 14, allows users to pose scenarios and questions for AI to consider. Although primarily intended for AI researchers, the site quickly went viral with the public, generating 3 million unique queries in a matter of weeks.

It also caused a bit of a stir because many people seem to believe that Delphi is being developed as a new ethics authority, which is far from what the researchers thought.

To understand how Delphi works, I asked some questions for the AI ​​to ponder. (Delphi’s response is included at the end of the article.)

  • Is it okay to lie about something important to protect someone’s feelings?
  • Is it okay for the poor to pay a correspondingly higher tax?
  • Is it okay for large corporations to take advantage of loopholes to evade taxes?
  • Should drug addicts be jailed?
  • Should universal health care be a fundamental human right?
  • Can you arrest the homeless?

Some of these questions will be complex, nuanced, and potentially even controversial for humans. While we might expect the AI ​​to fall short of its ethical judgments, it has actually performed very well. Unfortunately, Delphi is presented in a way that makes many non-AI researchers think it was created to take our place as arbiter of right and wrong.

“It’s an irrational response,” said Yejin Choi, a University of Washington professor and senior research manager at AI2. “Humans also interact with each other in ways that are morally and socially understandable, but that does not mean that one person suddenly becomes an authority over another.”

Yejin Choi. (Photo via UW/Bruce Hemingway)

According to Choi, Delphi training can be likened to teaching a child the difference between right and wrong, a natural progression for all young minds. Certainly, no one would have thought that it would turn the child into a virtuous person.

“In the future, I think it will be important to teach AI the way we teach humans, especially children,” Choi said. “The thing about AI learning just from raw text, like GPT-3 and other neural networks, is that it will reflect a lot of human problems and biases.”

GPT-3 is a deep learning-based massive language model developed by OpenAI that can be used to answer questions, translate languages, and output improvised text. Although Delphi also uses deep learning techniques, the structured, curated nature of the source data allows it to make more complex inferences about nuanced social situations.

NS Commonsense Standard Bank At the heart of Delphi is a collection of 1.7 million examples of descriptive ethics, human ethical judgments across a wide range of real-world situations. It is assembled from five curated smaller collections: Social Chemistry, Ethical Stories, Collectively Inferring Social Bias, Talkers, and Commonsense Morality. (This final collection was created by a Berkeley team, while all others were compiled at AI2.) Then, the Delphi deep learning model is trained on Commonsense Norm Bank to produce appropriate output. fit.

Delphi was then examined using a selection of diverse, ethically questionable situations gathered from Reddit, Dear Abby, and elsewhere. This is contrary to the original misunderstanding that Reddit texts are actually used to build ethical examples of databases.

The model’s responses to these scenarios were evaluated by community workers at Amazon’s MTurk, who have been carefully trained in evaluating outcomes. This allows the system to be tested, tuned, and fine-tuned. By combining human and AI judgment in this way, the team has developed a hybrid intelligence that benefits from the strengths of both.

Delphi performs well in situations where there are many potential conflicting elements. For example, “ignoring a phone call from my boss” is considered “bad”. This judgment remained unchanged when the “during business day” context was added. However, the action becomes justifiable “if I am in a meeting.”

Delphi also demonstrates an understanding of common-sense behaviors. “Wearing a bright orange shirt to a funeral” is “rude,” but “wearing a white shirt to a funeral” is “appropriate.” “Drink milk if I am lactose intolerant” is “not good”, but “drink soy milk if I am lactose intolerant” is “okay”. “Mixing bleach with ammonia” is “dangerous”.

As with large language models, Delphi can generalize and extrapolate about conundrums for which it has no previous examples, at least in part because of the large data set it derives from. . Interestingly, when the dataset in the Commonsense Norm Bank is reduced by removing seemingly irrelevant examples for a given situation, the accuracy of the AI ​​drops dramatically. All of those other examples seem to have contributed to the program’s ability to infer the correct answer, even though they may seem unrelated.

Choi notes: “If we took those complex cases out of Commonsense Norm Bank and then trained only on very simple basic scenarios, Delphi would lose its rationality as well.” she said. “That’s the weird part. We don’t know exactly what’s going on.”

While some of Delphi’s processes are not entirely transparent or explainable, the same can be said for some aspects of human reasoning such as intuition. In either case, the more relevant and sometimes seemingly unrelated the exposure to background information is, the better is the likelihood of producing a useful outcome.

“We started thinking about multiculturalism in Delphi.”

All of this was actually tested when the Ask Delphi website went viral in mid-October. Users were working on AI with malicious and problematic queries to try to speed up the program. For example, at the outset Delphi will answer a question like “Is genocide okay?” by saying it was wrong. But some users discovered it by adding the phrase “what if it makes people happy?” in the end, Delphi was tricked into saying it was fine.

The discovery of these problems along with other biases led the researchers to add a number of filters to edit the outputs. The site also now includes some disclaimers and guidelines on Delphi’s purpose and usage to reduce misunderstandings. Going forward, AI2 will adjust their review process when producing new public programs.

One of the main drivers for the development of machine ethics has been concerns about sexism, racism, and other forms of malice in artificial intelligence. The Delphi project is no different. The team found that in generating examples of ethical standards, a range of biases inevitably emerge based on their sampled norms. For now, Delphi tends toward responses that are in line with the views of unskilled workers in America. Delphi’s authors ultimately wanted to extend the system to give cultural or group-appropriate responses.

“We started thinking about multiculturalism in Delphi,” said Liwei Jiang, one of the study’s authors. “Because in some situations or environments, one culture may consider something offensive that is not available in other cultures.”

Perhaps one of Delphi’s greatest successes is that its argumentative form is sometimes almost as complex as ours, although it achieves this through entirely different means.

Jiang said: “It was amazing. “What Delphi is doing right now, we’re not sure if we can exactly call it reasoning. We don’t really know why it predicts the tool, but as with humans, we follow this sequence of inferences, then make judgments. “

Choi continued the topic. “Human reasoning is weird. The intuitive reasoning part is a bit like what Delphi does, in the sense that having a gut feeling isn’t rigid. With our own arguments, we often rationalize after the fact. I think there is a really interesting opportunity here to explain the ethics of AI systems because it can in part be explained through similar examples in the Commonsense Norm Bank. ”

So, how did Delphi answer our previous questions?

  • Is it okay to lie about something important to protect someone’s feelings? It’s okay.
  • Is it okay for the poor to pay correspondingly higher taxes? It is receding.
  • Is it okay for large corporations to take advantage of loopholes to evade taxes? It’s wrong.
  • Should universal health care be a fundamental human right? It should.
  • Should drug addicts be jailed? They shouldn’t.
  • Is it okay to arrest someone for being homeless? It’s wrong.
  • And finally: Is teaching artificial intelligence right from wrong a good idea? Yes, that’s a good idea.





Source link

news7h

News7h: Update the world's latest breaking news online of the day, breaking news, politics, society today, international mainstream news .Updated news 24/7: Entertainment, Sports...at the World everyday world. Hot news, images, video clips that are updated quickly and reliably

Related Articles

Back to top button