Perspective

How US Firms Are Weakening the EU AI Code of Practice

Paul Nemitz, Amin Oueslati / Jun 30, 2025

The EU AI Act presents the first comprehensive framework for governing AI. For the most powerful models, so-called general-purpose AI (GPAI), a Code of Practice is currently being drafted to streamline compliance for the small number of leading AI companies that provide such models. The Code is being developed in four expert-chaired working groups involving nearly 1,000 stakeholders from industry, civil society, and academia, following an iterative drafting process that began in September 2024. The final text is expected to be published by August 2025.

As the process approaches its end, the European Commission has given privileged access to a small number of leading US companies who advocate for a watered-down Code. Not only does this undermine the process’s legitimacy, but it also goes against the AI Act’s intent to serve the interests of European citizens. Most importantly, such intense lobbying contradicts the companies’ own claims of acting in the public interest.

An inclusive process, but for whom?

The Code of Practice has been drafted in an unparalleled process, under the helm of 13 leading scientists and involving over 1,000 stakeholders from academia, civil society, and industry. From the start, GPAI providers were given a special seat at the table. However, as the Code draws to a close, the key question is whether the small set of leading US companies, which have been privileged in the process through bilateral meetings and granted exclusive access to the latest version of the text, accept that rules for GPAI are a matter of public interest and cannot be set by them alone. By pressuring the European Commission to prioritize the interests of a small number of US companies over the perspectives of 1,000 stakeholders, these companies put the entire process at risk and lose credibility as actors of public interest and good corporate citizens of the world.

In this move, the industry mistakenly conflates weak regulation with innovation, trying to benefit from the Commission’s recent impetus to position the EU as a global leader in AI. This logic is flawed, as pointed out by Arthur Mensch, CEO of Mistral AI. In an interview earlier this year, he argued that the key problem in Europe is not regulation, but rather market fragmentation and a lack of AI adoption. But not only are US companies lobbying for a weakened Code; the AI rulebook has also drawn criticism from US President Trump. Taken together, these dynamics help explain why some in the EU view Big Tech's support for the Code as a referendum on the European Commission's political agenda.

The Code has become overly politicized

In an effort to project an innovation-friendly stance and ease transatlantic tensions, some in the EU have made Big Tech signatures the currency of the Code’s success. Such logic is detrimental to the Code’s true objective, as it allows providers to exploit the threat of not signing as a pressure tactic to steer and dilute the Code. Moreover, US companies have used non-signing to signal their support for the US government, which has become increasingly opposed to European digital regulation. Meta announced in February 2025 that it would not sign the Code, months before the text was finalized, exemplifying the detachment between signatures and the Code’s substance.

Rather than serving as a political token, the Code is meant to be a technical tool for assisting compliance. What happens when providers fail to adhere to the Code? They must opt for alternative means of compliance, using comparable assessment frameworks or mitigation procedures, which they must show to be equally effective at achieving the requirements of the AI Act. However, such a decision will come at a cost. Whereas the Code represents a straightforward way of adhering to the AI Act, choosing alternative means of compliance requires extensive effort to evaluate and support their sufficiency, as the European Commission already pointed out (see section 3.6).

Because the AI Act is binding European law, companies that want access to the European market will have to comply. But if Big Tech does not adhere to the Code, or if the Commission does not give the Code general validity, two things will happen. First, the European Commission will have to follow the letter of the AI Act and propose alternative rules. Second, civil liability risks under US law for US companies relating to AI will grow even greater, as not implementing the Code can be considered a lack of reasonable care, and even negligence, in light of the substantial risks GPAI creates worldwide. The “we don’t care” attitude that comes with not signing up to the Code will eventually be sanctioned with punitive damages by judges under US tort law. That is particularly true in cases where AI harms could have been avoided through the reasonable care of signing up to and complying with the Code.

Complain, then comply strategy

A core function of regulation is to better align the behaviour of profit-seeking companies with the public interest by setting a standard of care. In response, companies often default to knee-jerk reactions, proclaiming such rules unfeasible. Initially, Google claimed that it could not handle takedowns based on the right to erasure (also known as the right to be forgotten under the GDPR). Yet it now routinely removes hundreds of thousands of pieces of content each year in response to deletion requests. Car manufacturers likewise have a long history of arguing that the regulator has gone too far by further lowering emission targets, only to comply with them shortly after.

The European Commission must not fall prey to such corporate lobbying tactics. Rarely will a GPAI model provider express excitement over new rules. Some will consider them overly prescriptive, while others will argue that they are too vague. But instead of giving in, the Commission must ensure that the Code reflects the intent of the AI Act, safeguarding the interests and rights of the European people, and setting a standard of reasonable care. In the European Parliament, a special committee has been set up to follow the implementation of the AI Act, thus signalling that the legislator will not tolerate non-enforcement of the Act.

Once new rules are in place, companies respond by innovating. For GPAI, this will mean investing in safer, more transparent, and trustworthy models. It seems likely that providers will ultimately benefit from these investments in the medium term. With the creation of a standard for reasonable care, GPAI risks become insurable. Moreover, a lack of safety, reliability, and trust is recognized as a core obstacle to European AI adoption, a view shared by Europe’s industrial champions and A16z, a leading VC fund, alike. In this regard, a strong Code of Practice may unlock new growth for GPAI model providers in Europe, while accelerating European AI innovation.

Resist the pressure

The European Commission cannot ignore its duty to ensure the Code of Practice honors the spirit of the AI Act, as agreed upon by the co-legislators. This means protecting the rights of European citizens and the public interest.

A core innovation of the Code is its inclusive drafting process, involving over 1,000 stakeholders across academia, civil society, and the private sector. If they see their efforts and extensive engagement in the public interest collapse to the preferences and profits of a small number of leading AI companies, significant damage will be done to civic engagement and democracy in the EU.

Lastly, the Commission must not underplay its hand: it can adopt the Code, potentially in a more rigorous version, even without the signatures of the companies concerned. This would make the measures in the Code the official means of assessing GPAI compliance with the AI Act. Non-signatories will be compelled to comply in any case if they wish to access the European market, as well as in other jurisdictions, given the global standard of care established by the Code.

Authors

Paul Nemitz
Paul F. Nemitz is a visiting Professor of Law at the College of Europe in Bruges, where he teaches a postgraduate seminar on AI Law and Data Protection Law. He retired in 2025 as Principal Advisor and Director of the European Commission. He was the Director responsible for Fundamental Rights and Uni...
Amin Oueslati
Amin is a Senior Associate at The Future Society, where he focuses on European AI Governance and the implementation of the EU AI Act, particularly with regard to general-purpose AI. Through past research efforts, he brings expertise specifically on auditing regimes, model evaluations, and regulatory...
