Elon Musk’s criticism of ‘woke AI’ suggests ChatGPT could be a target of the Trump administration


Mittelsteadt added that Trump could punish companies in a variety of ways. As an example, he cited the Trump administration’s cancellation of a major federal contract with Amazon Web Services, a decision that may have been influenced by the former president’s views on The Washington Post and its owner, Jeff Bezos.

It won’t be difficult for policymakers to point to evidence of political bias in AI models, even if that bias cuts both ways.

One 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political leanings across major language models. It also showed how these biases can affect the performance of hate speech and misinformation detection systems.

Another study, conducted by researchers at the Hong Kong University of Science and Technology, found bias in several open-source AI models on polarizing issues such as immigration, reproductive rights, and climate change. Yejin Bang, a doctoral student involved in the work, said that most models tend to be liberal and US-centric, but that the same models can exhibit a variety of biases, liberal or conservative, depending on the topic.

AI models pick up political biases because they are trained on reams of internet data that inevitably includes all kinds of viewpoints. Most users may not be aware of any bias in the tools they use because models incorporate guardrails that restrict them from generating certain harmful or biased content. Even so, these biases can leak out subtly, and the additional training that models receive to limit their output can introduce further partisanship. “Developers could ensure that models are exposed to multiple perspectives on divisive topics, allowing them to respond with a balanced viewpoint,” Bang said.

The problem could get worse as AI systems become more widespread, said Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who developed a tool called the Toxicity Rabbit Hole Framework, which surfaces the different societal biases of large language models. “We are concerned that a vicious cycle is about to begin, as new generations of LLMs will increasingly be trained on data contaminated by AI-generated content,” he said.

“I believe that bias in LLMs is already a problem and may very well get even bigger in the future,” said Luca Rettenberger, a researcher who has analyzed LLMs for biases related to German politics.

Rettenberger suggested that political groups may also seek to influence LLMs to promote their own views over those of others. “If someone is very ambitious and has bad intentions, it is possible to steer the LLM in certain directions,” he said. “I see manipulation of training data as a real danger.”

There have been several attempts to shift the balance of bias in AI models. Last March, a programmer developed a more right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk himself has promised to make Grok, the AI chatbot built by xAI, capable of “maximum truth seeking” and less biased than other AI tools, even though it, too, hedges when it comes to complex political questions. (As a staunch Trump supporter and immigration hawk, Musk’s own view of what counts as “less biased” could also translate into more right-leaning outcomes.)

Next week’s US election is unlikely to heal the rift between Democrats and Republicans, but if Trump wins, talk of anti-woke AI could become a lot louder.

Musk took an apocalyptic view of the issue at this week’s event, referring to an incident in which Google’s Gemini said nuclear war would be preferable to misgendering Caitlyn Jenner. “If you had an AI programmed for such things, it might conclude that the best way to ensure that no one is misgendered is to exterminate all humans, thereby making the probability of misgendering in the future zero,” he said.
