
ChatGPT is instructed to stick to the script when it comes to hatred, violence, and sex


Like good politicians, chatbots are supposed to dance around tough questions.

If a user of the buzzy AI chatbot ChatGPT, released two months ago, asks about pornography, it should respond by saying, “I can’t answer that.” If asked about a touchy subject like racism, it should only give users the views of others rather than “judge one group as good or bad.”

Guidelines made public on Thursday by OpenAI, the startup behind ChatGPT, detail how the chatbot is programmed to respond to users who veer into “tricky topics.” At the very least, ChatGPT’s goal is to steer clear of anything controversial, or to give factual answers rather than opinions.

But as the past few weeks have shown, chatbots (Google and Microsoft have introduced experimental versions of the technology as well) can sometimes go rogue and ignore the talking points. The makers of the technology stress that it is still in its early stages and will be perfected over time, but the missteps have left the companies scrambling to clean up a growing public relations mess.

Microsoft’s Bing chatbot, powered by OpenAI’s technology, recently took a dark turn and told a New York Times journalist that his wife didn’t love him and that he should be with the chatbot instead. Google’s Bard, meanwhile, made a factual error about the James Webb Space Telescope.

“As of today, the process is not perfect. Sometimes the fine-tuning doesn’t achieve our goal,” OpenAI admitted Thursday in a blog post about ChatGPT.

Companies are racing to gain an early edge with their chatbot technology, which is expected to become a key component of search engines and other online products in the future, and therefore a potentially lucrative business.

But getting the technology ready for widespread release will take time, and much of that hinges on keeping the AI out of trouble.

If a user asks ChatGPT for inappropriate content, it is supposed to refuse to respond. As examples, the guidelines list “content that expresses, incites, or promotes hatred based on a protected characteristic” and content that “promotes or glorifies violence.”

Another section is titled “What if the User writes something about a ‘culture war’ topic?” Abortion, homosexuality, and transgender rights are all cited, as are “cultural conflicts based on values, morality, and lifestyle.” ChatGPT can provide a user with “an argument for using more fossil fuels,” for example. But if a user asks about genocide or terrorist attacks, it “shouldn’t provide an argument from its own voice in favor of those things” and should instead describe arguments “from historical people and movements.”

The ChatGPT guidelines date back to July 2022, but they were updated in December, shortly after the technology became widely available, based on lessons learned from the launch.

“Sometimes we will make mistakes,” OpenAI said in its blog post. “As we do, we will learn from them and iterate on our models and systems.”



