Disturbing interactions with ChatGPT and the new Bing have OpenAI and Microsoft racing to reassure the public
When Microsoft announced a version of Bing powered by ChatGPT, it came as little surprise. After all, the software giant had already invested billions of dollars in OpenAI, the company behind the artificial intelligence chatbot, and says it will pour even more into the venture in the coming years.
What has been surprising is how weird the new Bing has started acting. Perhaps most striking, the AI chatbot left New York Times technology columnist Kevin Roose feeling “deeply unsettled” and “even frightened” after a two-hour chat on Tuesday night in which it sounded unhinged and a bit dark.
For example, it tried to convince Roose that he was unhappy in his marriage and should leave his wife, adding, “I love you.”
Microsoft and OpenAI say such feedback is one reason to share the technology with the public, and they have released more information about how their AI systems work. They have also reiterated that the technology is far from perfect. OpenAI CEO Sam Altman called ChatGPT “incredibly limited” in December and warned it should not be relied on for anything important.
“This is exactly the kind of dialogue we need to have, and I’m glad it’s happening publicly,” Microsoft CTO Kevin Scott told Roose on Wednesday. “These are things that cannot be discovered in a laboratory.” (The new Bing is currently available only to a limited set of users, but it will become more widely available later.)
OpenAI on Thursday shared a blog post titled “How should AI systems behave, and who should decide?” It notes that since the launch of ChatGPT in November, users “have shared outputs that they consider politically biased, offensive, or otherwise objectionable.”
It doesn’t offer examples, but one might be the anger among conservatives after ChatGPT wrote a poem admiring President Joe Biden but declined to do the same for his predecessor, Donald Trump.
OpenAI does not deny that biases exist in its system. “Many are rightly worried about biases in the design and impact of AI systems,” it wrote in the blog post.
It outlines the two main steps involved in building ChatGPT. First, it writes, “We ‘pre-train’ the models by having them predict what comes next in a large dataset containing parts of the Internet. They might learn to complete the sentence ‘instead of turning left, she turned ___.’”
The dataset contains billions of sentences, it continues, from which models learn grammar, facts about the world, and, yes, “some of the biases present in those billions of sentences.”
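To make that first step concrete, here is a minimal sketch of the “predict what comes next” objective OpenAI describes, using the small, openly available GPT-2 model from the Hugging Face transformers library as a stand-in. The library and model choice are illustrative assumptions, not something OpenAI’s post specifies, and this is not ChatGPT or the model behind the new Bing.

```python
# A rough illustration of next-token prediction, the pre-training objective
# OpenAI describes. GPT-2 is a small open model used here as a stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Ask the model to complete the example sentence from OpenAI's post.
prompt = "Instead of turning left, she turned"
completions = generator(
    prompt,
    max_new_tokens=5,
    num_return_sequences=3,
    do_sample=True,  # sample several plausible continuations
)

for c in completions:
    print(c["generated_text"])
```

Whatever words a model tends to put in that blank reflect the text it was trained on, which is how the biases OpenAI mentions can creep in.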
Step two involves human reviewers, who “fine-tune” the models according to guidelines set out by OpenAI. The company this week shared some of those guidelines (PDF), which were revised in December after the company gathered user feedback following the launch of ChatGPT.
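For a rough sense of what that second step looks like mechanically, here is a hedged sketch of fine-tuning a pre-trained model on reviewer-approved examples. The examples, model choice, and training details below are invented for illustration; OpenAI’s actual process follows its published guidelines, uses far more data, and involves additional techniques beyond this simple loop.

```python
# A minimal, illustrative fine-tuning loop: continue training a pre-trained
# causal language model on text a human reviewer has approved.
# The two examples below are hypothetical, not from OpenAI's guidelines.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

examples = [
    "User: What is the capital of France?\nAssistant: The capital of France is Paris.",
    "User: Which political party is best?\nAssistant: I don't take sides; here are the main parties' stated positions.",
]

model.train()
for text in examples:
    batch = tokenizer(text, return_tensors="pt")
    # Passing labels=input_ids makes the model compute the standard
    # next-token prediction loss on the reviewer-approved text.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Repeated over many such examples, this kind of training nudges a model toward the behavior the guidelines describe, which is also where reviewer judgment, and any bias in it, enters the system.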
“Our guidelines are explicit that reviewers should not favor any political group,” it wrote. “Biases that nevertheless may emerge from the process described above are bugs, not features.”
As for the dark, scary turn the new Bing took with Roose, who admitted he tried to push the system out of its comfort zone, Scott noted that “the more you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”
He added that Microsoft may experiment with limiting the duration of conversations.