What about outside the EU?
GDPR, the EU’s data protection regulation, is the bloc’s best-known technology export regulation, and it has been copied everywhere from California to India.
The approach to AI that the EU has taken, targeting the riskiest AI, is one that most developed countries agree on. If Europeans can create a unified way to regulate technology, it could act as a template for other countries hoping to do the same.
“U.S. companies that comply with the EU AI Act will also raise the bar for American consumers in terms of transparency and accountability,” said Marc Rotenberg, head of the Center for AI and Digital Policy, a nonprofit that tracks AI policy.
The bill is also being closely watched by the Biden administration. The United States is home to some of the world’s largest AI labs, such as those at Google, Meta, and OpenAI, and leads global rankings in AI research, so the White House wants to understand how any regulations might apply to these companies. Influential figures in the US government, such as National Security Adviser Jake Sullivan, Commerce Secretary Gina Raimondo, and Lynne Parker, who leads the White House’s AI effort, have welcomed Europe’s efforts to regulate AI.
“This is a stark contrast to the way the United States viewed the development of GDPR,” said Rotenberg, “which at the time Americans said would end the internet, block out the sun, and end life on the planet as we know it.”
Despite some unavoidable caution, the United States has good reason to welcome the legislation. It is deeply concerned about China’s growing influence in the technology sector. For the US, the official position is that maintaining Western dominance in technology is a matter of whether “democratic values” prevail. It wants to keep the EU, a “like-minded ally,” close.
What are the biggest challenges?
Currently, some of the bill’s requirements are technically impossible to comply with. The first draft of the bill requires that data sets be error-free and that humans be able to “fully understand” how AI systems work. But the datasets used to train AI systems are vast, and checking them for errors by hand would require thousands of hours of work, if such verification is even feasible. And today’s neural networks are so complex that even their creators don’t fully understand how they reach their conclusions.
Tech companies are also unhappy about requirements to give external auditors or regulators access to their source code and algorithms in order to enforce the law.