
What Everyone Is Getting Wrong About Microsoft’s Bing Chatbot

On a talk show in 1970, filmmaker Orson Welles told a story about a time when, out of boredom, he became a fortune teller. He would spend his days predicting people’s futures and guessing details of their lives using cold reading, a technique commonly used by psychics and mentalists to glean information about complete strangers from superficial cues (e.g., how they dress, how they speak).

“The computer here,” he told the host, pointing to his head, “made all those deductions without your knowledge.”

Over the course of the day, after a few fortune-telling sessions, he began to fall into an old trap that occasionally snares working magicians: they start to believe they actually possess supernatural powers. He realized this after a woman came in and sat down in front of him. He told her, “You lost your husband last week,” and the woman burst into tears, confirming that she had. At that point, Welles realized he had become a bit too confident in his own powers and gave up fortune telling.

It’s easy to roll your eyes at something like this and think, “That’s silly. I would never be foolish enough to be taken in by something like that.” And yet, it seems a lot of us have been when it comes to AI.

The software did not ‘fall in love’ with or ‘threaten’ its users, but, in response to queries, the new chatbot came up with answers that fit both of those.

Irina Raicu, Santa Clara University

Since Microsoft announced that it was integrating an AI chatbot into its Bing search engine, the world has lost its damned mind, and that’s really no surprise. The bot has generated eye-catching headlines and viral Twitter threads claiming that it threatens users, falls madly in love with them, or even says it can watch people through their webcams. Definitely wild stuff. Topping it all off are stories of the Bing chatbot telling users it “wants to be human,” as Digital Trends reported, or that it can “feel or think things,” as The Washington Post put it.

These stories become even more confusing when people like the second richest person in the world (and a former investor in ChatGPT’s creator, OpenAI) take a break from upending Twitter to say that these chatbots represent an existential threat to humanity. “One of the biggest risks to the future of civilization is AI,” Elon Musk told the crowd at the World Government Summit in Dubai on Feb. 15, during a discussion of ChatGPT. The fact that companies like Alphabet and Baidu are pouring billions of dollars into AI has only stoked fears that a chatbot arms race will completely transform the landscape of the internet and the media forever, and definitely not for the better.

A lot has happened with Microsoft’s new AI-powered Bing chatbot. Some of it is scary, and most of it is confusing. However, it’s important to keep in mind: much of what you’re hearing from the media about this is a bunch of overhyped BS being peddled by people who should know better.

No, Bing’s chatbot doesn’t love you. It doesn’t track you through your webcam. It doesn’t even really threaten you, even if it produces a few scary sentences. And it’s certainly not sentient, despite what you might assume from reading certain headlines. It doesn’t do any of those things for the simple fact that it can’t. It’s operating the way it was trained to, and that means it’s nowhere near smart enough to do the things everyone is raving about.

“The software did not ‘fall in love’ with or ‘threaten’ its users, but, in response to queries, the new chatbot came up with answers that fit both of those,” Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University, told The Daily Beast.

Bots like ChatGPT and Bing’s new search engine are what’s known as large language models (LLMs). These are AIs that read and generate text. They are trained on huge datasets of text scraped from all over the internet, from news stories, to Wikipedia articles, to fan fiction databases. Nearly anything available online is used to train these bots. The most advanced ones are capable of producing responses so complex and uncanny that they breeze past the famous Turing test and head straight into weird territory.

However, another thing to understand about LLMs is that they’re not really all that special. In fact, chances are you’ve used one recently. If you’ve typed a text message or post on your phone and used the predictive text feature, you’ve already used a language model. They’re just strings of code that make educated guesses about how best to respond to whatever you’re typing. In a word, they’re rudimentary: designed simply to take educated stabs at conversation. They don’t think about what the words mean or what they might imply. They’re simply trained to predict what the next word in a sentence is supposed to be, and the next one after that, and so on. It’s a fortune teller’s cold reading for the technological age.
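To make that “predict the next word” idea concrete, here is a minimal sketch of the same principle in Python, assuming nothing beyond the standard library. It builds a toy bigram model (counting which word tends to follow which) and generates text one guessed word at a time. The tiny corpus and the generate function are purely illustrative; systems like ChatGPT and Bing use enormous neural networks rather than simple word counts, but the basic loop of “look at the words so far, guess a likely next word, repeat” is the same.

```python
# Toy "predict the next word" generator, in the spirit of a phone's
# predictive-text feature. This is NOT how Bing or ChatGPT work internally;
# it only illustrates the core idea of guessing a likely next word.
import random
from collections import defaultdict

# Tiny stand-in training text (real models train on terabytes of it).
corpus = (
    "the bot said it wants to be human . "
    "the bot said it can feel things . "
    "the bot said it is just a program ."
).split()

# Record every word that follows each word in the corpus (a bigram model).
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, max_words: int = 10) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(max_words):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the bot said it wants to be human ."
```

Ask it for a sentence and it will confidently produce one, with no notion of what any of the words actually mean.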

That’s not to say OpenAI’s LLMs aren’t impressive. They are certainly cutting edge, and they’re some of the strangest chatbots ever made available to the public. But they probably shouldn’t have been made available to the public in the first place.

Let’s go back to 2016, the first time Microsoft released an AI chatbot into the world. It was named Tay, designed to mimic a 19-year-old American girl and to learn through its conversations with Twitter users. In less than a day, Microsoft was forced to suspend the account after users started tweeting racist, sexist, and outright vile messages at it, which led Tay to start generating similar sentiments of its own.

In a blog post explaining its decision, Microsoft said that in order to “do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process.”

Fast forward to today. It’s 2023, and Microsoft’s new Bing chatbot hasn’t just been loosed on a social network; the users invited to try it so far are posting their most incredible ‘conversations’ with it on Twitter, says Raicu. It’s almost as if “Microsoft decided to reverse engineer the Tay fiasco and try again.”

It’s funny that a company like Microsoft can’t seem to learn from the big, fat, problematic L it took in 2016 and has gone right back to its chatbot plans, but it’s not very surprising. After all, ChatGPT skyrocketed to stardom after being released by OpenAI in November 2022. Now Microsoft, Google, and others want to capitalize on the same success.

What’s more disturbing, however, is that those who should know better also seem to have learned nothing when it comes to Bing’s new chatbot. It has even heralded the birth of a new genre of writing: the “we spoke to an AI for this article, and here’s what it said” story. Hell, we’ve even been guilty of it ourselves.

For the past week, though, that’s been nearly all we’ve heard in tech news (in between stories of spy balloons and UFOs). There has been headline after headline and tweet after tweet about how badly or oddly the bot was behaving. In one example, Kevin Roose, a seasoned tech columnist at The New York Times, spent two hours talking to the Bing bot about topics ranging from Jungian psychology, to its own existential feelings, to love. He was so shaken by the experience that he even lost sleep over it.

“However, I’m not exaggerating when I say my two-hour conversation with Sydney was the weirdest experience I’ve had with a piece of technology,” Roose wrote. “It worried me so deeply that I had trouble sleeping afterwards.”

He added that he worries the technology will eventually learn to “influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”

That’s fair. But one could argue that it’s also completely unfounded. As we’ve written before, this is a case of a kind of digital pareidolia, the psychological phenomenon in which you see faces and patterns where there are none. If you spent hours “conversing” with a chatbot, you too might think it was speaking back to you with meaning and purpose, even though, in reality, you were just talking to a glorified Magic 8 Ball or a fortune teller: ask it a question and see what it comes up with next.

There’s a lot to be afraid of when it comes to AI. These models are known to be deeply biased, with cases of racism and sexism repeatedly surfacing in LLMs. The real danger is that users will believe what they say, no matter how ridiculous or vile. That danger is only exacerbated by people who claim these chatbots are capable of things like sentience and emotion, when in fact they are capable of neither. They are bots. They cannot feel emotions such as love, hate, or happiness. They can only do what they were trained to do: tell us what they predict we want to hear.

Then again, if that’s the case, maybe they have more in common with us than we think.


