
If Google kills the news media, who will feed the AI beast?


One of the biggest concerns with the rise of these AI CliffsNotes products is how prone they are to making mistakes. It’s easy to see how AI summaries, without human intervention, can serve up not just inaccurate information but dangerously wrong results. For example, in answer to a search query asking why cheese doesn’t stick to pizza, Google’s AI proposed adding “1/8 cup of non-toxic glue to the sauce to make it more pliable.” And if you’re bitten by a snake and the AI tells you to “apply ice or heat the wound,” following that advice will save your life about as well as praying and hoping for the best. Other search queries have returned flatly inaccurate information, such as one asking which president attended the University of Wisconsin–Madison: Google explained that President Andrew Jackson attended college there in 2005, even though he died 160 years earlier, in 1845.

On Thursday, Google said in a blog post that it was scaling back some summary results in certain areas and working to fix the problems it had seen. “We have been vigilant in monitoring external feedback and reports, and have taken action on a small number of AI Overviews that violate content policies,” Liz Reid, head of Google Search, wrote on the company’s website. “This means that the overviews contain information that is potentially harmful, obscene or infringing.”

Google has also tried to allay publishers’ concerns. In another post last month, Reid wrote that the company saw “links included in AI Overviews receive more clicks than when the page appeared as a traditional web listing for that query” and that as Google expanded “this experience, we will continue to focus on sending valuable traffic to publishers and creators.”

While AI can retrieve facts, it lacks the human understanding and context needed for truly insightful analysis. Oversimplification and the potential misrepresentation of complex issues in AI summaries can stifle public discussion and lead to the dangerous spread of misinformation. Not that humans lack that ability themselves: if the past decade of social media has taught us anything, it’s that humans are more than happy to spread misinformation and prioritize their biases over the truth. But as AI-generated summaries become increasingly popular, even those who still value nuanced, well-researched journalism may find it increasingly difficult to access such content. And if the economics of the news industry continue to deteriorate, it may soon be too late to prevent AI from becoming the primary gatekeeper of information, with all the risks that entails.

The news industry’s response to this threat has been mixed. Several outlets have sued OpenAI for copyright infringement, as The New York Times did in December, while others have decided to do business with the company. This week, The Atlantic and Vox became the latest news organizations to sign licensing agreements with OpenAI, allowing the company to use their content to train its AI models, which you could think of as training the robots to take your job even faster. Media giants such as News Corp, Axel Springer, and the Associated Press have already climbed aboard. Yet, proving it is not beholden to any machine overlord, The Atlantic published a story about the media’s “devil’s bargain” with OpenAI on the same day that its CEO, Nicholas Thompson, announced the partnership.

Another investor I spoke with likened the situation to a scene from Tom Stoppard’s Arcadia, in which a character observes that if you stir jam into your porridge by swirling it in one direction, you cannot reconstitute the jam by stirring the opposite way. “The same is true for all of these summary products,” the investor continued. “Even if you tell them you don’t want them summarizing your articles anymore, that doesn’t mean you can get your content back out of them.”

But here’s the question I keep coming back to: Let’s say Google, OpenAI, and Facebook succeed and we all read news summaries instead of actual news. Eventually those news organizations will go out of business, and then who will create the content the AI needs to summarize? Or maybe by then it won’t matter, because we’ll be so lazy and so obsessed with ever-shorter content that the AI will simply summarize everything into a single word, like Irtnog.
