Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership?

Jimmy Farrell / Apr 8, 2025

Office of the European Commission in Brussels. Shutterstock

Beyond each being a pair of vowels, AI and the EU both present significant challenges when it comes to setting the right course. This article makes the case that reducing regulation for large general-purpose AI providers under the EU’s competitiveness agenda is not a silver bullet for catching Europe up to the US and China, and would only serve to entrench European dependencies on US tech. Instead, by combining its regulatory toolkit and ambitious investment strategy, the EU is uniquely positioned to set the global standard for trustworthy AI and pursue its tech sovereignty. It is an opportunity that Europe must take.

Recent advances in AI have drastically shortened the improvement cycle from years to months, thanks to new inference-time compute techniques that enable self-prompting, chain-of-thought reasoning in models like OpenAI’s o1 and DeepSeek’s R1. However, these rapid gains also increase risks like AI-enabled cyber offenses and biological attacks. Meanwhile, the EU and France recently committed €317 billion to AI development in Europe, joining a global race with comparably large announcements from both the US and China.

Now turning to EU AI policy, the newly established AI Office and 13 independent experts are nearing the end of a nine-month multistakeholder process of drafting the Code of Practice (CoP): the voluntary technical details of the AI Act’s mandatory provisions for general-purpose AI providers. The vast majority of the rules will apply only to the largest model providers, ensuring proportionality: the protection of SMEs, start-ups, and other downstream industries. In the meantime, the EU has fully launched a competitiveness agenda, with the Commission’s recently published Competitiveness Compass and first omnibus simplification package outlining plans for widespread streamlining of reporting obligations, amidst mounting pushback against this simplification narrative. Add to this the recent withdrawal of the AI Liability Directive, and it is clear which way the political winds are blowing.

So why must this push for simplification be replaced by a push for trustworthy market creation in the case of general-purpose AI and the Code of Practice? I’ll make three main points:

1) Regulation is not the reason Europe lacks Big Tech companies.

2) Sweeping deregulation creates legal uncertainty and liability risks for downstream deployers, slowing trusted adoption of new technologies and thereby growth.

3) Watering down the CoP for upstream model providers with systemic risk will almost exclusively benefit large US incumbents, entrenching dependency and preventing tech sovereignty.

In her book Digital Empires (2023), Anu Bradford, author of The Brussels Effect, outlines a number of reasons why tech giants akin to those in Silicon Valley (which have driven significant portions of US growth in recent decades) have not emerged across the Atlantic. Spoiler alert: it’s not because of regulation. As indicated in the timeline below, US Big Tech existed long before the EU’s recent introduction of digital regulation enshrining societal safety protections, competitive markets, and basic fundamental rights. The EU’s tech ecosystem had ample time to emerge in the years preceding and following the turn of the century, free of so-called “red tape,” yet it did not, and deregulation will not make it emerge now. The reasons for Europe’s lagging tech sector are far more nuanced.

Timeline of the founding years of US Big Tech companies compared with the years certain EU digital policy came into force.

One reason presented by Bradford is that the European digital single market remains fragmented, with differing languages, cultures, consumer preferences, administrative obstacles, and tax regimes preventing large tech companies from growing seamlessly within the bloc and throughout the world. Even more fragmented are the EU’s capital markets, resulting in poor access to venture capital for tech start-ups and scale-ups. A further obstacle is harsh national-level bankruptcy laws that are “creditor-oriented” in the EU, compared with more forgiving “debtor-friendly” equivalents in the US, resulting in a lower risk appetite among European entrepreneurs. Finally, skilled migration is significantly more streamlined in the US, where federal-level initiatives like the H-1B visa have led to the majority of Big Tech CEOs hailing from overseas, including Google’s Sundar Pichai, Nvidia’s Jensen Huang, and Microsoft’s Satya Nadella. The EU’s Blue Card scheme, by contrast, only permits access to the issuing country and differs in application between Member States.

The obstacles preventing a thriving European tech ecosystem from matching Silicon Valley’s are deeply structural and long-term. I will not try to address them in this short piece, but I would point readers to the Draghi report on European competitiveness, which contains bold suggestions for the first three (digital single market integration, capital markets union, and bankruptcy laws), and to MEP Anna Strolenberg, who has compelling suggestions for attracting skilled talent. Leading industry voices, including US VC firm a16z, European VC firm Merantix Capital, and French provider Mistral AI, have likewise downplayed regulation as the cause of Europe’s AI lag. To reiterate: the EU ‘lagging behind’ on trillion-dollar tech companies and the accompanying innovation was not a result of regulation before there was regulation, and is not a result of regulation now.

Moving on to the second point: deregulation creates legal uncertainty, increases liability risks for downstream deployers (the vast majority of the EU AI market), and slows the trusted adoption of new technologies. Time and time again, history has shown the importance of clear guardrails for new technology. Whether for planes, cars, or drugs, early use of dangerous new technologies without accompanying rules saw frequent preventable accidents, reducing consumer trust and slowing market growth. Now, with robust checks and balances in place from well-resourced regulatory authorities, such markets have been able to thrive, providing value and innovation to citizens. Other sectors, like nuclear energy and, more recently, crypto, have suffered from an initial lack of regulation, causing industry corner-cutting and leading to infamous disasters (from Fukushima to the collapse of FTX) from which public trust has been difficult to win back. Regulators around the world currently risk the same fate for AI.

This point is particularly relevant for so-called ‘downstream deployers’: companies that build applications on top of underlying models provided, usually, by Big Tech. Touted by European VC leader Robert Lacher as Europe’s “huge opportunity” in AI, downstream deployers, particularly SMEs, stand to gain from the Code of Practice, which ensures that the necessary regulatory checks and balances occur upstream, at the level of the model provider. This increased legal certainty provides a safety net for members of groups like the Digital SME Alliance, which supported GPAI regulation in the AI Act (the legal basis for the Code of Practice) and stands to benefit from high safety and ethical standards upstream. The view that upstream regulation creates legal certainty, trust, and subsequently flourishing markets is shared by Kai Zenner, Head of Office for MEP Axel Voss, in relation to the recently withdrawn AI Liability Directive, a withdrawal Finance Watch described as a major concession to US model providers. Weak upstream rules will only shift the consequences of AI risks onto the shoulders of smaller players downstream, the foundation of the EU’s promising future AI ecosystem, and thus run contrary to the economic objectives of the competitiveness agenda.

Finally, the EU’s enduring, and now potentially crippling, dependency on US technology companies has been made a priority by the new Commission, best exemplified by the title of Executive Vice President Henna Virkkunen’s portfolio: Tech Sovereignty, Security and Democracy. Amid the geopolitical developments of the last few months, including all-time-low transatlantic relations and an unfolding trade war, some have gone as far as warning of the possibility of US technology being used for surveillance of Europe and of the US sharing intelligence with Russia. Clearly, the urgency of tech sovereignty has drastically increased. A strong Code of Practice would return agency to the EU, ensuring that US upstream incumbents meet basic security, safety, and ethical standards whilst also easing the EU’s AI adoption problem by ensuring the technology is truly trustworthy.

So, concretely, what needs to be done? Bruegel economist Mario Mariniello summed it up concisely: “On tech regulation, the European Union should be bolder.” The complexity of the EU, exemplified by its diverse and at times fragmented economy, and the complexity of AI, exemplified by its rapid development and multipart value chain, make EU AI regulation an immense challenge.

This article has outlined why deregulating highly capable AI models produced by the world’s largest companies is not a solution to Europe’s growth problem. Instead of stripping back the obligations that protect European citizens, the EU must combine its ambitious AI investment plan with bold leadership in setting global standards, accelerating trustworthy adoption, and ensuring tech sovereignty. This combination will put Europe on the right path to drive this technological revolution forward for the benefit of all.

Authors

Jimmy Farrell
Jimmy Farrell is the EU AI Policy Co-Lead for Pour Demain, a think-tank working at the interface between technology and policy across national, regional, and international fora. Jimmy is currently working on policy recommendations for the EU to ensure the responsible development and deployment of ge...
