
A Lawsuit Over Confusion Caused by Fake News “Hallucinations”


Perplexity did not respond to a request for comment.

In an emailed statement to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably to OpenAI. “We welcome principled companies like OpenAI, which understand that integrity and innovation are essential if we are to realize the potential of Artificial Intelligence,” the statement said. “Perplexity is not the only AI company that abuses intellectual property, nor is it the only AI company that we will pursue vigorously and rigorously. We have made clear that we would rather woo than sue, but, for the sake of our journalists, writers and companies, we must challenge the content kleptocracy.”

However, OpenAI faces accusations of brand dilution of its own. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat attributed fabricated quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed the Times called red wine (in moderation) a “heart-healthy” food, when in fact it did not; the Times argues that its actual reporting has debunked claims about the health benefits of moderate drinking.

“Copying articles to operate alternative, commercial AI products is illegal, as we made clear in our letter to Perplexity and our lawsuit against Microsoft and OpenAI,” said Charlie Stadtlander, NYT director of external communications. “We welcome this lawsuit by Dow Jones and the New York Post, which is an important step toward ensuring that publishers’ content is protected from this kind of misappropriation.”

According to Matthew Sag, a professor of law and artificial intelligence at Emory University, if publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “incredible difficulty.”

“It is absolutely impossible to guarantee that a language model will not hallucinate,” Sag said. In his view, the way language models operate, predicting words that sound correct in response to prompts, is always a kind of hallucination; sometimes the output just sounds more plausible than others.

“We only call it a hallucination if it doesn’t match our reality, but the process is exactly the same whether we like the output or not.”
