Navigating Europe’s AI Code of Practice Before the Clock Runs Out

Anda Bologa / Feb 14, 2025

Europe’s second stab at a Code of Practice for general-purpose AI pledges clarity but may bury innovation under a fresh pile of mandates, risking even deeper confusion, writes Anda Bologa.

A photo of the European Commission in Brussels. Shutterstock

On February 2, the first provisions of Europe’s AI Act came into force, banning AI practices deemed to pose unacceptable risk, including certain uses of facial recognition. These prohibitions are the opening installment of a sweeping set of rules designed to increase transparency and accountability for advanced AI systems.

To make compliance more manageable, the European Commission has drafted a Code of Practice for general-purpose AI. The Code aims to translate the Act’s broader principles into step-by-step guidance, helping developers align their models with Europe’s regulatory standards while still encouraging the innovation needed to boost competitiveness and economic dynamism. The question remains, however, whether the Code, now in its second draft, delivers on that promise or instead becomes an unwieldy set of prescriptions that steers businesses away from AI altogether.

In just a few months, the AI Office assembled nearly a thousand participants across four working groups, hosted a kick-off Plenary on September 30th, and unveiled a second draft on December 19th. It intends to finalize the Code by April 2025 – just months before the AI Act’s obligations for general-purpose AI models take effect in August. Participants have pointed to a jam-packed schedule that leaves little time for deeper debate or thorough revisions. The Chairs acknowledge they cannot integrate all comments, raising concerns that the final text may be fragmented. Additionally, the latest draft is heavier on metrics and obligations for everything from training data to hardware, prompting fears that a hurried, highly prescriptive framework will smother innovation instead of guiding it.

The Code of Practice for general-purpose AI demonstrates a sincere effort to get the details right. Yet, in a rush to cover every contingency, it risks overlooking the bigger picture: spurring the next generation of AI-driven breakthroughs that can speed up drug discovery, modernize public services, and let small farmers use new predictive tools for planting and harvesting. Innovation is a delicate process, especially in emerging areas like large language models or real-time climate analytics. Europe possesses the scientific expertise and market size to shape a future where these tools become transformative assets in every corner of the continent. But that future hinges on how carefully policymakers, industry players, and civil society calibrate the rules.

The Code arrives at a pivotal moment. Europe has struggled to capitalize on digital opportunities. In 2023, only 8% of European businesses used AI: about 30% of large firms but a mere 7% of SMEs. Even Europe’s best performers — Denmark and Finland — reached a modest 15% on AI use, while Romania and Bulgaria languished at 2% and 4%, respectively.

These numbers underscore a much bigger challenge. Most of Europe’s enterprises remain small or mid-sized, accounting for over half of the continent’s GDP and employing over 88 million people. Research suggests that smaller businesses can see gains of nearly 10% when they successfully adopt digital platforms. Despite this potential, only 30% of SMEs have even moderate digital capabilities. A rigid Code risks pushing small firms even further away from the advanced tools Europe needs.

Europe’s AI revolution will not happen on autopilot. Real progress demands revamping processes, investing in talent, and scaling up what works. The public sector must also move faster if Europe is to modernize healthcare, education, and core government services. Tangled or rigid rules risk derailing those ambitions. Europe’s digital regulations already weigh heavily on businesses: over the past 25 years, the number of economy-wide laws has doubled, and the EU has rolled out close to 100 tech-focused laws. High-minded ideals often mix with fragmented enforcement and overlapping rules.

The General Data Protection Regulation (GDPR) is a cautionary tale. While it advanced important privacy principles, it also precipitated an 8% dip in profits and a 2% drop in sales among covered firms, with small tech companies feeling double that sting. Research suggests nearly a third of the apps then on offer vanished from EU markets post-GDPR, reducing consumer choice and competition. A newer generation of digital policies, such as the Digital Markets Act and the Digital Services Act, is expected to create annual compliance costs of up to €71 billion, nearly half of it borne by SMEs. On top of that, several big tech companies have held back product features in Europe because of the uncertain compliance environment – delays that, in turn, limit the availability of sophisticated tools for small restaurants, travel agencies, and mom-and-pop shops.

The second draft of the AI Code of Practice appears to be heading down a similar path. It demands far more detailed disclosures and references multiple forthcoming standards that have yet to be defined. This approach might work for a global player with teams of lawyers on retainer, but it raises anxiety for the typical European enterprise. Many worry that practical oversight – targeting genuine risks and anchored in proven guidelines – will get lost in a tangle of demands. Meanwhile, vital decisions, such as how to classify “systemic risk” or how to measure model emissions, remain unsettled. Providers could end up spending scarce resources trying to comply with moving targets.

When the European Parliament was debating the AI Act draft, ChatGPT burst onto the scene and overnight changed the conversation around large language models. Now, DeepSeek is triggering a similar jolt: the law’s carefully calibrated 10^25 FLOPs threshold for “systemic risk” already looks outdated, with models trained below that compute benchmark matching top-tier capabilities. This recurring disconnect – meticulously planned rules overshadowed by rapid innovation – reveals Europe’s ongoing struggle to keep pace. At today’s speed of AI, Brussels’s carefully orchestrated frameworks risk becoming footnotes while pioneers race ahead.
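To put that threshold in perspective, here is a back-of-envelope sketch of how a model’s training compute stacks up against it. It is a minimal illustration, assuming the widely cited 6 × parameters × tokens approximation for dense-transformer training FLOPs and a hypothetical 70-billion-parameter model trained on 15 trillion tokens; the AI Act does not prescribe this formula, and real compute accounting is more involved.

```python
# Back-of-envelope comparison of a model's training compute against the
# AI Act's 10^25 FLOP threshold for "systemic risk" general-purpose models.
# Assumption: the common 6 * parameters * tokens estimate for dense
# transformer training FLOPs; the Act itself does not mandate this formula.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the AI Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * parameters * training_tokens

# Hypothetical model: 70 billion parameters, 15 trillion training tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above systemic-risk threshold?", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```

Under those assumptions, the hypothetical model lands at roughly 6.3 × 10^24 FLOPs – below the threshold, even though models of that scale now rival far larger systems on many benchmarks, which is precisely why a fixed compute cutoff already looks shaky.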

As part of its effort to boost the EU’s global standing, the Commission unveiled the Competitiveness Compass to unify Europe’s approach to digital growth and AI deployment. The Compass promises simpler regulations, stronger infrastructure, and more funding for innovators, yet its goals span everything from streamlining administrative procedures to building cross-border partnerships without clear timelines or metrics. Key questions about energy policies and financing for local AI ventures remain vague. Unless the Compass sets sharper targets and acts quickly, it risks becoming just another broad directive that falls short of giving Europe’s AI sector the momentum it urgently needs.

That said, Europe is hardly devoid of strengths. The region has a large pool of STEM graduates and roughly the same number of software developers as the US. It leads in scientific research on topics ranging from quantum mechanics to climate modeling, areas where AI can drive breakthroughs. Calls for a calmer approach to regulation – centered on validated risk-benefit analysis, better alignment with existing law, and post-implementation assessments – could offer a path forward. The idea is to make sure that the promise of productivity gains and job augmentation isn’t lost in a maze of compliance.

Europe also needs a cultural reset around failure. Across the Atlantic, bankruptcy is often treated as a badge of entrepreneurial grit rather than a moral failing. That difference in mindset frees American startups to take bigger risks and bounce back when things go wrong. Meanwhile, strict bankruptcy rules and inconsistent tax breaks on employee stock options hold European entrepreneurs back. Loosening these constraints wouldn’t just help individual founders – it could encourage a more daring approach to innovation overall.

Even if Europe rethinks its attitude toward failure, the region must still confront a daunting venture capital gap. Of the 53 startups worldwide valued at over $10 billion, only two call the EU home. Investors remain hesitant to deploy large sums in nascent AI ventures, partly due to fragmented markets and complex regulatory requirements across Member States. Late-stage funding is particularly scarce, limiting startups’ ability to grow into global contenders. Addressing these shortcomings will require deeper cross-border collaboration, harmonized tax incentives, and stronger public-private partnerships. Without a concerted effort to nurture homegrown tech innovators, Europe risks seeing its most promising AI breakthroughs drift to capital-rich environments – a scenario the Code of Practice for general-purpose AI should work to avert.

Ideally, the Code’s final version will align with the AI Act’s core aims – democratic values, consumer protection, and fairness – without sinking every provider in unending compliance chores. Doing so could help Europe tap the full potential of a technology that has already boosted productivity. If the authors of the Code want to secure both Europe’s leadership in innovation and its reputation for rigorous standards, they will need to draw sharper lines between what truly prevents harm and what merely ties up resources. If they manage that, Europe can reassert itself as a dynamic hub for AI, equipped to compete with any region – and better positioned to turn the next wave of scientific discoveries into real-world results that benefit everyone.

Authors

Anda Bologa
Anda Bologa is a non-resident Fellow with the Tech Policy Program at the Center for European Policy Analysis (CEPA). The Barcelona Centre for International Affairs recognized Anda as one of the ‘35 under 35’ tech leaders. During her tenure at the European Union Delegation to the United Nations, she ...
