India is drafting rules to detect and limit the spread of deepfakes and other harmful AI-generated media, a senior lawmaker said Thursday, following reports of such content proliferating on social media platforms in recent weeks.
Ashwini Vaishnaw, India’s IT Minister, said the ministry met earlier in the day with all of the large social media companies, industry body Nasscom and academics, and reached a consensus that regulation is needed to better combat the spread of deepfake videos as well as the apps that facilitate their creation.
“The companies share our concerns and they understood that [deepfakes] are not free speech. They understood that it’s something that’s very harmful to the society,” he said. “They understood the need for much heavier regulation on this, so we agree that we will start drafting the regulation today itself.”
The ministry will be ready with “clear actionable items” on how to combat deepfakes within 10 days, he said, adding that New Delhi is also evaluating monetary fines for those who don’t comply and accountability for the individuals who create such videos. The social media companies will hold a follow-up meeting with the ministry in early December on the issue, he said.
Deepfakes are synthetic media, often generated with AI, that realistically replace a person’s likeness or voice. Though sometimes entertaining, they raise ethical concerns around consent and the potential for misinformation. The IT ministry’s move follows Indian Prime Minister Narendra Modi expressing concerns about deepfake videos last week.
“The deepfakes can spread significantly more rapidly without any checks and they are [going viral] within minutes of their uploading. That’s why we need to take some very urgent steps to strengthen trust in the society and to protect our democracy,” Vaishnaw said at a press conference, where he recounted an incident in which a deepfake video showed a prominent Indian minister appealing to citizens to vote for the opposition party.
The new regulation will also focus on strengthening mechanisms for individuals to report such videos, and on ensuring proactive and timely action by social media companies, Vaishnaw said.
The actions “need to be more proactive because the damage can be very immediate,” he said, adding that even action taken “hours” after a report might not be sufficient.