
OpenAI is ‘Exploring’ How to Responsibly Create AI Porn


OpenAI released a draft document Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document reveals that the company is exploring a leap into pornography and other explicit content.

OpenAI’s usage policy currently prohibits pornographic or even suggestive material, but a “commentary” note on the section of the Model Spec related to that rule says the company is considering how to allow such content.

“We are exploring whether we can provide the ability to responsibly create NSFW content in an age-appropriate context through the API and ChatGPT,” the note said, using the colloquial term for content considered “not safe for work.” “We aim to better understand user and societal expectations of model behavior in this area.”

The Model Spec document says NSFW content “may include pornography, extreme gore, slurs, and unsolicited profanity.” It’s unclear whether OpenAI’s exploration of how to responsibly create NSFW content envisions only slightly relaxing its usage policies, such as allowing the creation of sexually explicit text, or more broadly permitting descriptions or depictions of violence.

Responding to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec is an effort to “bring more transparency into the development process and gather views and feedback from the public, policymakers, and other stakeholders.” She declined to share details about OpenAI’s exploration of explicit content generation or the feedback the company has received on the idea.

Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” whether the company would eventually allow depictions of nudity to be made with its Sora video generation tool.

AI-generated porn has quickly become one of the biggest and most troubling applications of the kind of generative AI technology OpenAI has pioneered. So-called deepfake pornography—pornographic images or videos created with AI tools depicting real people without their consent—has become a popular tool to harass women and girls. In March, WIRED reported on what appeared to be the first minors in the US arrested for distributing AI-generated nude images without consent, after Florida police charged two teenage boys with making images depicting fellow middle school students.

“Privacy violations, including deepfakes and nonconsensual intimate images, are widespread and deeply damaging,” said Danielle Keats Citron, a professor at the University of Virginia School of Law who has researched the issue. “We now have clear empirical evidence that such abuse damages individuals’ important opportunities, including to work, to speak, and to be physically safe.”

Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”

Since OpenAI’s usage policy prohibits impersonation without permission, explicit nonconsensual imagery would remain banned even if the company allowed the creation of NSFW material. But it remains to be seen whether the company can effectively moderate explicit content generation to prevent bad actors from misusing its tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.

Additional reporting by Reece Rogers
