Perspective

AI Hype and the Capture of EU AI Regulation

Hannah Ruschemeier / Apr 30, 2026

This post is part of a series on Hype Studies that will appear on Tech Policy Press in 2026. More from the series is here.

European Commission President Ursula von der Leyen.

Writing about AI hype creates an interesting paradox: every contribution about AI, whether critical or not, seems only to feed the hype. At the same time, this apparent contradiction justifies the need for a critical theoretical approach such as hype studies.

At first glance, hype and law also appear contradictory, as the setting of norms in democratic constitutional states follows established, formalized procedures. Democratic consensus-building is often time-consuming; in the context of digital regulation, this is frequently used to explain the 'legal lag'—the notion that law is slow and perpetually trails technical development. Hype works by speeding up processes, urging stakeholders toward hasty decisions. It creates urgency and pressure to act instantly, in contrast with legal thinking, which rationalizes through procedural and structural reasoning.

Even if this observation is only partially accurate, the law itself appears to be succumbing to AI hype. This is evident in the heated discussions in law faculties about their role in the age of ubiquitous AI applications, in the purported 'revolution' of the practical legal market by AI, and in the field of digital regulation itself, where there is significant pressure to suppress AI regulation out of concern that it will harm innovation.

This essay examines that development by looking at European (de-)regulation efforts around AI. The European Commission’s current 'Omnibus' proposal for the 'simplification' of EU digital regulation perfectly illustrates that AI hype has now arrived at the heart of European lawmaking. Through substantial influence over the production of regulatory knowledge, Big Tech seeks to achieve epistemic capture of the EU regulatory process, showing how hype can dismantle public expertise.

Current political discussions and regulatory proposals in the context of AI reflect the success of what I call the “innovation narrative” around AI: AI, simply put, means innovation; innovation is desirable without restriction; and regulation hinders it. This narrative is fueled by the misleading picture of an “AI race” among the US, China, and the EU, as well as by the epistemic capture of AI.

The race metaphor misleadingly implies a clear finish line that can be reached first by a single actor, based solely on quantitative metrics such as processing speed. Vital socio-technical considerations—including sustainability, legal compliance, and alignment with societal values—are sidelined because they do not easily translate into quantifiable "race" data.

The current deregulatory developments at the EU level are not fueled by AI hype alone; they are, of course, the product of complex geopolitical tensions, shifting political majorities, and economic trends. Nevertheless, the influence of hype is clearly recognizable in the procedures, substantive design, and overarching systematic approach of current AI regulation.

First, Big Tech lobbying in Europe appears to have reached an unprecedented level. This phenomenon involves more than traditional communication with policymakers or the strategic placement of policy positions; it extends to the influence over regulation-relevant knowledge, specifically through the production of scientific contributions. In regulatory matters perceived as highly “technical,” scientific expertise is in high demand.

Second, the contributions from the research departments of global tech firms—spanning computer science as well as ethical and other normative questions of digital technology—have become overwhelming in volume. This specialized expertise carries even greater weight in regulatory debates when only a handful of companies possess the resources to develop the products being regulated—as is the case with the large foundation models underlying popular generative AI applications. Here, industrial actors enjoy a significant knowledge advantage.

Consequently, Big Tech companies—paradoxically both warning of AI’s disruptive potential and driving its advancement—exert substantial influence over the production of regulatory knowledge through scientific publications. This economic dominance manifests in a subset of regulatory capture that I call "epistemic capture." Epistemic capture is hard for other actors in the field to overcome: university research struggles to compete, and smaller companies simply cannot afford to maintain corresponding research departments. Thus, AI hype not only lets Big Tech capture AI policy by staging a deregulatory innovation narrative as the only path to progress in EU stakeholder circles; these firms are also increasingly the only ones who possess the legal and technical expertise to effectively conceive of an AI future.

As AI becomes ever more complex, the knowledge dependency between policy makers and Big Tech rises. Many contributions from major tech firms appear, on the surface, to critically engage with the societal implications of the technologies they develop. However, the case of Timnit Gebru, who left Google following a dispute over the clearance of a paper that remains highly relevant to today’s regulatory debate (On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?), demonstrates that all too often corporate research is fundamentally beholden to profit motives rather than the public interest. These mechanisms lead to a "knowledge closure" that favors conservative and orthodox regulatory approaches, or deregulation.

The European Commission’s de-regulatory turn: the Omnibus proposal

The Digital Omnibus Regulatory Package of November 19, 2025, bears the clear hallmarks of AI hype, exemplified by changes to the GDPR and the AI Act. Among the fundamental proposed amendments to the GDPR (General Data Protection Regulation) is a provision to permit AI training using special categories of personal data. This represents a departure from the GDPR’s foundational principle of technological neutrality, privileging AI training over other forms of data processing without any discernible justification.

The proposed Article 9(2)(k) and (5) GDPR introduces a new exception to the general prohibition on processing sensitive data under Article 9(1), specifically permitting such processing for the "development and operation" of AI systems as defined by the AI Act. This represents a clear and calculated dilution of data protection standards. By extending the existing exceptions in the AI Act, the proposal introduces a dangerous level of ambiguity. Paragraph 5 calls for "appropriate" technical and organizational measures when handling sensitive data, yet fails to define them. This shifts the entire burden onto the controller without requiring a proportionality assessment. The result is a dual failure: SMEs (Small and medium-sized enterprises) will struggle to document what constitutes "appropriate" safeguards, while dominant tech players effectively receive a green light to process sensitive data at scale.

Furthermore, this approach favors AI over other technologies without a formal risk assessment. The term "training an AI system" is too vague to function as a rigorous regulatory category, making meaningful enforcement nearly impossible. The absurdity of this "AI exceptionalism" is striking: an AI system creating deepfakes could theoretically benefit from this new permission, while a deterministic algorithm used in life-saving medical research would still be bound by strict consent requirements.

Even the provisions for high-risk AI systems—the centerpiece of the AI Act—appear to be falling victim to AI hype. These provisions introduce obligations for transparency, human oversight, safety, and data management. Structurally, the AI Act is modeled on product safety law, allowing compliance to be demonstrated through standardization. The responsibility for drafting these harmonized standards lies with the private standardization organizations CEN and CENELEC (the European Committee for Standardization and the European Committee for Electrotechnical Standardization).

However, by the Commission's deadline of August 31, 2025, these organizations had failed to produce the necessary standards. Longstanding criticism that this process grants private actors excessive influence has now been vindicated. Consequently, the application of these rules is set to be postponed by another year, to the end of 2027, and to 2028 for certain systems. This grants the AI industry an additional year to adjust to the framework, during which time it is relieved of compliance obligations.

From the perspective of democratic theory, this is 'not a good look': a regulatory framework decided through democratic procedures is being delayed by the failure of private organizations. In other sectors, legal certainty is maintained through the interplay of supervisory authorities, academia, and the judiciary as they apply and refine legal provisions; here, that process is stalled.

Don’t believe the hype!

Signs of AI hype are not confined to digital regulation; EU member states increasingly view the deployment of AI across many sectors as an inevitability. This narrow perspective makes it difficult to address systemic root causes. Instead, AI is framed as a 'silver bullet,' as evidenced by the massive expansion of powers granted to police and security authorities (see the recent post by Elke Schwarz in this series) to utilize AI and data analytics tools.

Europe should not aim to replicate the scale of American Big Tech AI, nor is it in its interest to do so. Legal certainty, digital sovereignty, and a principled commitment to fundamental rights are not barriers to progress; they are the true foundations of sustainable development. Europe’s competitive edge lies in leveraging its unique regulatory identity to foster investment in infrastructure and SMEs, particularly within the strategic frontier of industrial AI.

Long-term public acceptance depends on sustainable technical development rooted in fundamental rights, a regulated yet free economic order, and robust democratic processes. Far from being a barrier, regulation is essential; a total absence of oversight does not guarantee innovation and can, in fact, stifle economic growth by eroding the rule of law. Regulatory decisions should not be driven by hype.

Authors

Hannah Ruschemeier
Hannah Ruschemeier holds the Chair for Public Law and Law of Digitalisation at the University of Osnabrück. Her research focuses on AI and platform regulation, data protection law and privacy, and democracy in the context of digital transformation. She is a fellow at IVIR, the Institute for Informat...
