
See how biased AI image models are with these new tools

One theory as to why that might be, says Jernite, is that nonbinary brown people may have had more visibility in the press recently, meaning their images end up in the data sets that AI models use for training.

OpenAI and Stability.AI, the company that built Stable Diffusion, say they’ve introduced fixes to mitigate the biases ingrained in their systems, such as blocking certain prompts that seem likely to generate offensive images. However, these new tools from Hugging Face show how limited those fixes are.

A Stability.AI spokesperson told us that the company trains its models on “data sets specific to different countries and cultures,” adding that this should “serve to mitigate biases caused by overrepresentation in general data sets.”

An OpenAI spokesperson didn’t comment specifically on the tools, but pointed us to a blog post explaining how the company has added various techniques to DALL-E 2 to filter out biased, sexual, and violent images.

The issue is becoming more pressing as these AI models are adopted more widely and produce more realistic images than ever before. They are already being rolled out in a range of products, such as stock photos. Luccioni says she worries that the models risk reinforcing harmful biases on a large scale. She hopes the tools she and her team have created will bring more transparency to image-generating AI systems and underscore the importance of making them less biased.

Part of the problem is that these models are trained on data that is predominantly US-centric, which means they mostly reflect the associations, biases, values, and culture of the United States, says Aylin Caliskan, an associate professor at the University of Washington who studies bias in AI systems and was not involved in this research.

“What ends up happening is this fingerprint of American culture online … it exists all over the world,” says Caliskan.

Caliskan says Hugging Face’s tools will help AI developers better understand and reduce the biases in their AI models. “When people see these examples firsthand, I believe they will be able to better understand the significance of these biases,” she says.


