Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem

Mia Hoffmann, Owen J. Daniels / Mar 10, 2025

Mia Hoffmann is a Research Fellow for AI governance and Owen J. Daniels is Associate Director of Analysis and Andrew W. Marshall Fellow at Georgetown’s Center for Security and Emerging Technology.

February 11, 2025—Ursula von der Leyen, President of the European Commission (left), meets with US Vice President JD Vance (right) at the Paris AI Summit.

Natural ecosystems are delicately balanced. Ecologists have identified keystone species as linchpins of ecosystems, playing outsize roles in shaping their environment that often become most conspicuous in their absence. Keystone species are not always apex predators but nonetheless help keep potentially disruptive features of the environment in check. Elephants, powerful herbivores, maintain the African savanna’s grassland ecosystem by eating trees and shrubbery that would otherwise impede the growth of grasses; grazers like antelopes and zebras can accordingly flourish, lions and hyenas can feed on the grazers, and other species up and down the savanna’s food chain can thrive as they have adapted to do over time. Without elephants, the savannas might start taking on characteristics of woodlands, allowing for disruptive environmental changes.

Consider for a moment the analogy of the global AI policy ecosystem. The major players in this environment are the United States, China, and Europe. By virtue of characteristics like developer talent, domestic companies, investment capital, natural resources, and access to enabling technologies such as computing and energy infrastructure, these actors are the world leaders in AI, each with distinct strengths. The US is home to many of the world’s leading AI firms, particularly those developing foundation models, including LLMs. China stands alongside the US and arguably leads in some areas, for instance in developing real-world AI applications and, after DeepSeek’s R1 model release, in expanding the proliferation of powerful open models accessible to a broader range of users.

The role of Europe, and the European Union specifically, in the AI policy ecosystem is nuanced. It is not home to a concentration of leading developers in the same way as the US and China. It has not been a leader in investment or in producing talent. But as a body of advanced, wealthy economies with strong consumer bases, Europe has played the role of a thoughtful regulator seeking to protect its AI consumers from the risks and harms that technology products can cause to society. Europe’s regulatory-first approach has arguably not been without its faults and has highlighted the challenges of trying to regulate emerging technologies preemptively. However, it has undeniably kept questions about governments’ roles in risk mitigation and citizen welfare at the forefront of global AI discussions. If the EU’s role in the AI ecosystem is changing, what could it mean for the other actors and the health of the overall ecosystem?

Recently, there has been a demonstrable shift in European policymakers’ messaging around AI governance. In her speech at the Paris AI Action Summit in February, European Commission President Ursula von der Leyen promised to cut red tape to support AI business development and announced major investments in AI infrastructure in the EU. French President Emmanuel Macron similarly urged the EU to simplify its rulebook and “resynchronize with the rest of the world,” stating the EU was “back in the race” for AI leadership. Commission Executive Vice President for Tech Sovereignty, Security, and Democracy Henna Virkkunen’s statements at the summit aligned with those of other European leaders, as she pledged to implement the AI Act “in an innovation-friendly manner” and to ensure the Code of Practice for General-Purpose AI would not create “any extra burden.” She also announced efforts to simplify the EU’s AI and tech regulation as part of upcoming omnibus proposals, legislative packages used to amend existing regulation. A few days later, the European Commission withdrew the proposed AI Liability Directive from its work program.

Following many years of dedicated efforts to regulate and oversee digital technology in EU markets, these maneuvers suggest a clear strategic shift aimed at presenting the EU as an open, competitive, and innovation-friendly market. If the EU follows through with this strategy to try to attract more business and investment in AI, it risks weakening its unique position and influence in the global AI ecosystem and will likely weaken global AI governance efforts. As we have argued before, the EU’s AI Act serves as a regulatory blueprint for other countries, and the framework’s implementation has the potential to be equally influential. State-level governments in the United States, like Colorado, and nations outside of Europe, like South Korea, have used the AI Act as a reference point as they developed their own artificial intelligence legislation. If the EU adopts a more business-friendly interpretation of the AI Act’s requirements, the result could be weaker risk management and protection from AI harms in Europe and globally.

Evidence does not suggest that the rest of the AI policy ecosystem is prepared to adopt the EU’s unique role in AI governance. The US appears firmly headed toward a deregulated approach to AI development. The US already took a soft-touch approach to AI oversight under the Biden-Harris administration, primarily based on voluntary commitments to safety best practices from the private sector. The Trump administration is reviewing and dismantling the little governance infrastructure that the previous administration established after rescinding Biden’s Executive Order on Safe, Secure, and Trustworthy AI in January. It has also issued a new Executive Order, Removing Barriers to American Leadership in Artificial Intelligence, that aims to “sustain and enhance America’s global AI dominance” through “AI systems that are free from ideological bias or engineered social agendas.” OMB Memoranda M-24-10 and M-24-18, which regulate the use and acquisition of AI systems by federal agencies, are also under review. Vice President JD Vance doubled down on this deregulatory approach in his speech at the Paris summit, stating, “I am not here to talk about AI safety,” and the US did not sign the summit’s final declaration.

China’s role in the global AI governance and regulatory ecosystem is complex. On the one hand, the PRC appears to share concerns about risks from powerful AI models similar to those of Europe and the US, having developed legislation aimed at mitigating risks from AI systems domestically. However, China’s approach to AI governance differs from that of the US and the EU, as some of these measures appear geared toward minimizing the risks generative AI poses to China’s political stability rather than protecting citizens from AI harms. The PRC has also established several multilateral governance initiatives, most prominently the Global AI Governance Initiative and the Shanghai Declaration on Global AI Governance. These efforts have attracted praise for including participation from developing countries. However, critics have noted that these initiatives are light on specifics and primarily serve to advance China’s geopolitical interests and position in emerging markets. AI governance with Chinese characteristics thus entails significant differences from the EU’s approach that would prevent China from playing a similar role in the ecosystem.

Time will tell if the EU follows through with the approach it signaled in Paris last month, but key questions remain. Why now? What could be motivating the EU’s changes? It is unclear whether a new approach is based on proactive or reactive factors. Does it believe the long-term economic benefits of increased investment and powerful AI models warrant a change in strategy? Or is it reacting to trade pressures and the threat of retaliation from the US if it cracks down on American technology companies? Another question pertains to the EU’s room for maneuver. Aside from investments, what can the European Commission really do to stimulate AI business and innovation on the continent now that regulations have been enacted? And is deregulation the right approach to address the barriers facing European businesses when it comes to AI? Given the effort that went into crafting and agreeing on the legislation among member states, is it practical to reopen some aspects of it to new negotiations?

In short, if the global AI governance ecosystem loses its keystone species, the rest of the environment will likely feel the impact.

Authors

Mia Hoffmann
Mia Hoffmann is a Research Fellow for AI governance at Georgetown’s Center for Security and Emerging Technology. In her work, she explores international approaches to AI regulation and studies AI harm incidents, building a deeper understanding of failure modes and the efficacy of risk mitigation pra...
Owen J. Daniels
Owen J. Daniels is Associate Director of Analysis and Andrew W. Marshall Fellow at Georgetown’s Center for Security and Emerging Technology, where he researches military and AI governance issues and supports the Analysis Team portfolio. He previously worked in the Joint Advanced Warfighting Division...
