The Meta team behind Galactica argues that language models are better than search engines. “We believe this will be the next interface for how humans access scientific knowledge,” the researchers write.
This is because language models are “potentially” capable of storing, combining, and reasoning about information. But that “potentially” is crucial. It’s a coded admission that language models cannot do all of this yet, and they may never be able to.
“Language models are not really knowledgeable beyond their ability to capture patterns of strings of words and output them in a probabilistic way,” says Shah. “It gives a false sense of intelligence.”
Gary Marcus, a cognitive scientist at New York University and a vocal critic of deep learning, argued in a Substack post titled “A Few Words About Bullshit” that the ability of large language models to mimic human-written text is nothing more than “a superlative feat of statistics.”
Meta is not the only company to champion the idea that language models could replace search engines. For the past couple of years, Google has promoted its PaLM language model as a way to look up information.
It’s a tantalizing idea. But to suggest that the human-like text such models generate will always contain trustworthy information, as Meta appeared to do when promoting Galactica, is reckless and irresponsible. It was an unforced error.