
Can Europe’s Laws Keep Up with AI’s New Arms Race?

Ben Lennett / Jun 15, 2023

Ben Lennett, a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy, is an editor at Tech Policy Press.

Note: Originally published on May 16, 2023, this piece has been updated following the passage of amendments to the AI Act by the European Parliament on June 14, 2023.

A full two years before the introduction of ChatGPT jolted the public debate over the harms of big tech, Europe was already working out a legal framework to regulate the use of artificial intelligence (AI) technologies. The AI Act proposes a risk-based approach to regulation, focused on identifying potentially harmful uses of AI systems and obligating companies to take steps to minimize the risks those uses pose to the public.

A presentation from the European Commission visualized the AI Act’s regulatory structure as a pyramid, with a small handful of banned uses at the top. These uses, such as social scoring or predictive policing, pose an unacceptable risk to the public and are therefore prohibited. One level down, high-risk uses, including medical devices and AI in essential government services, are permitted, but companies must establish and implement risk management processes. Further down, lower-risk uses like consumer-facing services are allowed, subject to transparency obligations such as notifying users that they are interacting with an AI system and labeling deepfakes. Finally, at the bottom, minimal or no-risk uses are permitted without restriction.

It is a prudent approach, one that recognizes AI is a set of different technologies, tools, and methods that can be used to benefit the public or deployed, intentionally or unintentionally, in ways that cause significant harm. But generative AI products like ChatGPT may have exposed a flaw in the European approach. As a February essay in Internet Policy Review argued, ChatGPT fundamentally differs from the ‘traditional’ AI systems the Act was initially written to cover: “Generative AI systems are not built for a specific context or conditions of use, and their openness and ease of control allow for unprecedented scale of use.”

Fortunately for Europeans, the law had yet to be finalized, and over the past few months, members of the European Parliament raced to adjust it to this new reality. EU lawmakers added several amendments to the Commission’s earlier proposal to address the risks of generative AI in particular.

According to a release from the European Parliament, generative AI companies will need to “assess and mitigate risks, comply with design, information and environmental requirements and register in the EU database.” Moreover, generative foundation models, like GPT, will have to comply “with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.” Finally, the Act now categorizes very large online platforms’ recommender systems that use AI as high-risk use cases.

Beyond these recent changes, it’s also important to recognize that the AI Act does not exist in a vacuum. Europe has the General Data Protection Regulation (GDPR), which, although imperfect, served as the mechanism for Italy to compel OpenAI to make changes to ChatGPT to protect user privacy, including allowing users to object to the company’s use of their data to train the system. In addition, Europe is moving ahead with enforcement of the Digital Services Act (DSA), which imposes special rules on very large online platforms, including annual assessments of the risks of online harms on their services. Google Search and Microsoft’s Bing are already subject to those rules, and given ChatGPT’s popularity, it could very well end up with that designation if its user base crosses the DSA’s threshold of 45 million monthly active users in the EU.

In parallel to the AI Act, the European Commission also proposed “a targeted harmonization of national liability rules for AI, making it easier for victims of AI-related damage to get compensation.” In this way, the AI Act is an ex-ante regulation, intended to reduce the harmful outcomes of AI systems before they reach the public. In contrast, liability rules and the courts can hold accountable companies that fail to take appropriate care in developing or deploying such technologies.

The Act still has several steps to go through before it becomes law, including trilogue negotiations between EU member countries, the Parliament, and the European Commission. Tech Policy Press will continue to update this page as the legislation is revised and new versions are made public.

Updates

May 9, 2023 - European Parliament considers draft amendments to the law.

June 14, 2023 - The European Parliament passes an amended version of the AI Act. Talks will now begin with EU countries in the European Council on the final form of the law.
