Meta has built a huge new language AI — and it’s free
Pineau has helped change the way research is published at some of the biggest conferences, introducing checklists of what researchers must submit alongside their results, including code and details about how experiments are run. Since joining Meta (then Facebook) in 2017, she’s championed that culture in its AI lab.
“Commitment to open science is why I am here,” she said. “I wouldn’t be here on any other terms.”
Ultimately, Pineau wants to change the way we judge AI. “What we call state-of-the-art today can’t just be about performance,” she said. “It also has to be the most advanced work in terms of responsibility.”
Still, giving away a large language model is a bold move for Meta. “I can’t tell you there’s no risk of this model producing language that we’re not proud of,” says Pineau. “There will be.”
Weighing the risks
Margaret Mitchell, one of the AI ethics researchers Google forced out in 2020, who is now at Hugging Face, sees the release of OPT as a positive move. But she thinks there are limits to transparency. Has the language model been tested with sufficient rigor? Do the foreseeable benefits outweigh the foreseeable harms, such as the generation of misinformation, or racist and misogynistic language?
Releasing a major language model into a world where a wide audience is likely to use it, or be affected by its output, carries responsibility, she said. Mitchell notes that this model could generate harmful content not only by itself, but also through downstream applications that researchers build on top of it.
According to Pineau, Meta AI has audited OPT to weed out some harmful behaviors, but the aim is to release a model that researchers can learn from, warts and all.
“There’s been a lot of talk about how to do this in a way that lets us sleep at night, knowing that there’s a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she said. She rejects the idea that you shouldn’t release a model because it’s too dangerous — the reason OpenAI gave for initially withholding GPT-3’s predecessor, GPT-2. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.