Google just launched Bard, its answer to ChatGPT—and it wants you to make it better
Google has a lot riding on this launch. Microsoft has partnered with OpenAI to make an aggressive play for Google’s top spot in search. Meanwhile, Google stumbled when it first tried to respond: in a Bard teaser the company released in February, the chatbot made a factual error, and Google’s market value dropped by $100 billion overnight.
Google won’t share many details about how Bard works: large language models, the technology behind this wave of chatbots, have become valuable intellectual property. But it will say that Bard is built on top of a new version of LaMDA, Google’s flagship large language model, and that it will update Bard as the underlying technology improves. Like ChatGPT and GPT-4, Bard is fine-tuned using reinforcement learning from human feedback, a technique that trains a large language model to give more useful and less toxic responses.
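Google hasn’t published Bard’s training details, but the core idea behind reinforcement learning from human feedback is public research: human raters compare pairs of model responses, and a reward model is trained so the preferred response scores higher. A minimal sketch of that pairwise preference loss, with made-up reward scores:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise (Bradley-Terry style) loss used in RLHF reward-model training:
    -log(sigmoid(r_chosen - r_rejected)). The loss is small when the reward
    model already ranks the human-preferred response higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Toy scores for two candidate replies to the same prompt (illustrative only):
agrees = preference_loss(reward_chosen=2.0, reward_rejected=-1.0)
disagrees = preference_loss(reward_chosen=-1.0, reward_rejected=2.0)

print(f"{agrees:.4f}")     # small loss: reward model agrees with the rater
print(f"{disagrees:.4f}")  # large loss: reward model disagrees
```

Minimizing this loss over many human comparisons gives a reward signal that is then used to steer the language model toward responses raters prefer.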
Google has been working on Bard behind closed doors for a few months, but says it’s still an experiment. The company is currently offering the chatbot for free to people in the US and UK who sign up to a waitlist. These early users will help test and improve the technology. “We will get user feedback, and we will enhance it over time based on that feedback,” says Zoubin Ghahramani, Google’s vice president of research. “We are mindful of all the things that can go wrong with large language models.”
But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and a former co-lead of Google’s AI ethics team, is skeptical of this framing. She notes that Google has been working on LaMDA for years, and she thinks introducing Bard as an experiment “is a PR trick that larger companies use to reach millions of customers while also removing accountability if anything goes wrong.”
Google wants users to think of Bard as a companion to Google Search, not a replacement. A button below Bard’s chat widget says “Google It.” The idea is to nudge users toward Google Search to check Bard’s answers or learn more. “It’s one of the things that help us offset the limitations of the technology,” says Jack Krawczyk, Bard’s product lead.
“We really want to encourage people to really explore other places, sort of confirm things if they’re not sure,” Ghahramani said.
This acknowledgment that Bard can get things wrong has shaped the chatbot’s design in other ways too. Users can interact with Bard only a limited number of times in any given session. This is because the longer large language models engage in a conversation, the more likely they are to go astray. For instance, many of the more bizarre responses from Bing Chat that people shared online appeared at the end of drawn-out exchanges.
Google won’t confirm what the conversation limit will be at launch, but says it will be set fairly low for the initial release and adjusted based on user feedback.
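Google hasn’t described how the cap is enforced. A per-session turn limit like the one described above can be sketched as a simple wrapper that cuts off a conversation after a fixed number of exchanges (all names here are hypothetical, not Bard’s actual implementation):

```python
class CappedSession:
    """Hypothetical chat session that refuses further input after max_turns,
    mirroring the per-session interaction limit described above."""

    def __init__(self, max_turns: int = 5):
        self.max_turns = max_turns
        self.turns = 0

    def ask(self, prompt: str) -> str:
        if self.turns >= self.max_turns:
            return "Turn limit reached. Please start a new session."
        self.turns += 1
        return f"(model reply to: {prompt!r})"  # placeholder for a real model call

session = CappedSession(max_turns=2)
print(session.ask("hello"))
print(session.ask("tell me more"))
print(session.ask("one more?"))  # third turn hits the cap
```

Resetting the counter only when a fresh session starts is what keeps any single conversation short enough to stay on the rails.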
Google is also playing it safe in terms of content. Users will not be able to ask for pornographic, illegal, or harmful material (as judged by Google) or personal information. In my demo, Bard wouldn’t give me advice on how to make a Molotov cocktail. That’s standard for this generation of chatbots. But it also won’t provide any medical information, such as how to spot signs of cancer. “Bard is not a doctor. It’s not going to give medical advice,” says Krawczyk.
Perhaps the biggest difference between Bard and ChatGPT is that Bard produces three versions of each response, which Google calls “drafts.” Users can click between them and pick the response they like best, or mix and match between them. The point is to remind users that Bard cannot generate perfect answers. “There’s a sense of authoritativeness when you only see one example,” says Krawczyk. “And we know there are limitations around factuality.”