“The Chicken Or The Egg” Of AI Regulation

Hugo Neuillé / Jan 22, 2025

Yutong Liu / Better Images of AI / Joining the Table / CC-BY 4.0

The rapid development of AI technology has been matched by a surge in efforts to regulate it. Policymakers are working hard to keep pace with this fast-moving industry, resulting in an exponential increase in texts, bills, laws, and frameworks for AI regulation. In 2024, state lawmakers in the United States introduced nearly 700 pieces of AI legislation. This overwhelming number of new regulations creates significant uncertainty for developers without necessarily improving consumer safety.

Creating efficient, relevant, and lasting regulations requires several key factors. First and foremost, policymakers need a working definition of the object of their laws, which requires thorough work to capture the essence of what will be affected by their text. This is a challenging task in the case of AI because its definition remains in flux as the technology evolves. The OECD's decision to update its official definition of an artificial intelligence system in November 2023 to ensure “continued relevance” illustrates that dynamic. They also need to fully understand the impact of this technology on various aspects of society to create an efficient scope and reduce blind spots that could undermine their efforts. Finally, they need to anticipate future developments and trends to ensure that their approach remains relevant in the long run.

Global Competition in AI Regulation: Being First Isn’t Always Best

These regulatory challenges create a competitive environment for policymakers vying to set a potential gold standard. Governments believe they can gain leverage by proposing frameworks that could shape the future of this global industry. This is particularly true for actors that have fallen behind in the technological race, like China and, even more so, the European Union.

The EU adopted its AI Act in June 2024. This landmark legislation marks a significant step in shaping the development and deployment of AI. However, even though the Act is comprehensive, the crucial question remains: will it be relevant in the long term? The tech industry is characterized by rapid change, and its very nature gives rise to unexpected capabilities and features that continually challenge existing regulatory frameworks.

This brings us to a paradoxical challenge: regulators must simultaneously anticipate future market trends and the trajectory of technological development while responding to the realities of an already complex and chaotic environment. This situation, coupled with the pace of innovation, is hard to navigate. Attempts at regulation risk being rendered irrelevant rather quickly.

A striking example of this rapid regulatory obsolescence is China’s attempt to establish itself as an early leader in AI regulation. In November 2022, China introduced the Provisions on the Administration of Deep Synthesis Internet Information Services, marking one of the first regulatory efforts to address generative AI technologies. This regulation included foundational requirements, such as mandating disclosure of synthetic content to users and implementing clear labeling systems to mitigate the deceptive potential of such tools.

At the time of its release, the provisions were sufficient to address the steady but limited scope of generative AI technologies, which were still poorly understood outside niche circles. However, the term “Deep Synthesis Internet Information Services” has remained largely unknown to the public and is not used outside specific regulatory and academic contexts, primarily in China. Meanwhile, the underlying technology has been widely adopted since 2022, fundamentally challenging the relevance of the regulation issued by the Cyberspace Administration of China (CAC).

The timing of China's legislation is particularly notable. It was released on November 25, 2022, just days before OpenAI launched ChatGPT to the public on November 30, 2022. This product fundamentally reshaped the public’s relationship with generative AI. While the term “generative AI” had existed long before ChatGPT, it was previously confined to niche, tech-savvy circles. The release of OpenAI’s tool, offered for free to the public, thrust generative AI into mainstream awareness, rapidly forcing individuals and businesses to reconsider their understanding of and engagement with these technologies.

ChatGPT’s accessibility marked a paradigm shift. Generative AI suddenly became a widely recognized and understood technology, significantly lowering the barriers to adoption for businesses and individuals. As a result, entirely new areas of concern emerged, ranging from academic debates and media coverage to the fears of low-skilled workers about job displacement.

This unprecedented surge in generative AI’s popularity created uncertainty for policymakers about how to navigate the new landscape. There was an urgent need for frameworks, definitions, and language to fully understand the impact of this technology and how to frame it. As the technology outpaced expectations, earlier regulatory efforts to address these tools quickly became inadequate and obsolete, leaving policymakers scrambling to catch up.

This is precisely the situation Chinese regulators faced in their initial efforts to address the generative AI sector. The basic provisions outlined in the law were insufficient to address the profound societal impacts of generative AI’s widespread adoption. The attempt to establish China as an early player in AI regulation was overtaken by the pace of technological progress and private-sector innovation, rendering even the terminology obsolete.

In response, less than a year later, the Cyberspace Administration of China released new regulations. In May 2023, the Interim Measures for the Administration of Generative Artificial Intelligence Services were adopted, coming into effect on August 15 of the same year. Notably, the legislation embraced the more universally recognized term “generative AI,” reflecting the need for regulators to adapt to evolving language and public understanding.

It is important to note that this situation does not stem from the Chinese government's inability to develop regulations for AI. In fact, China has been a very early player in crafting nationwide bills to frame the development and deployment of AI. In that regard, some provisions issued by the CAC remain more specific regarding providers' obligations than their counterparts in the EU AI Act.

While this case study may seem minor, it is particularly illustrative of the current situation. Policymakers face a monumental challenge in regulating AI; as technology advances at an unprecedented pace, governance efforts struggle to keep up. Two core factors make this task particularly laborious. First, the very nature of AI itself creates unique regulatory hurdles. AI systems are inherently unpredictable, with capabilities often emerging unexpectedly during their operation. These systems may eventually function as complex computational entities, evolving beyond the sum of their components and defying traditional methods of control or oversight. Crafting regulations that can anticipate and manage such dynamic behavior is notably tricky.

Second, unlike earlier technological revolutions, such as the internet, where governments were deeply involved in funding and shaping development, AI has been almost entirely driven by private entities and private capital. This near-total privatization of innovation has left public institutions on the sidelines, limiting their understanding of the technology and ability to craft informed, forward-thinking policies. The result is a stark decoupling of innovation from governance, with regulators perpetually playing catch-up in an environment where technological advancements outpace the frameworks meant to oversee them.

The Consequences of Falling Behind

The consequences of these challenges could become severe in the coming years. AI’s advancements are notoriously hard to predict, with ever-more powerful models emerging. The emergence of dual-use capabilities in publicly available models introduces substantial risks. Malicious actors could weaponize models available to the general public to carry out cyber or kinetic attacks that could cause major economic losses and potential loss of life, a prospect that further underscores the need for comprehensive oversight. Furthermore, advanced AI systems with greater autonomy pose unprecedented challenges for our democracies, as these tools could harm electoral integrity and deeply damage public debate and trust in our institutions. Finally, the private sector’s multibillion-dollar investments in pursuing artificial general intelligence (AGI) could force a complete regulatory reset should such a technology surpass anything we have observed to date.

Building efficient structures to recouple technological research and governance efforts is crucial. Synchronizing those two forces would create a self-reinforcing loop of mutual understanding and objective alignment, allowing us to escape this constant race between policymakers and industry leaders. In that regard, the work done by AI Safety Institutes is central, and the success of the EU AI Office and the US AI Safety Institute in attracting major players from the private sector is valuable. Promoting knowledge exchange through a global network of Safety Institutes is the best strategy to avoid fragmented efforts that cannot adequately address globalized risks.

Political Roadblocks Ahead

However, these efforts remain threatened. The incoming Trump administration might retreat from Biden’s efforts to promote AI safety and fall back on a self-regulation regime for the private sector, further compounding the challenges policymakers face. Similarly, in Europe, French President Emmanuel Macron’s decision to rename the upcoming 2025 French “AI Safety Summit” the “AI Action Summit” illustrates the difficulty of sustaining safety-focused policymaking against a power-driven approach that appeals to governments competing for an edge.

Even the developers of these cutting-edge technologies often struggle to fully grasp their societal implications. The fact that terminology itself is in flux underscores the urgency of this issue. Without concerted efforts to study these technologies and develop a shared vocabulary to define them, we risk falling into a vicious cycle of regulatory inadequacy. Promoting intellectual engagement with AI’s potential impacts and fostering knowledge that enables society to collectively label and frame these phenomena is critical. This foundational work is essential to achieving long-term safety while unlocking the transformative benefits of promising technologies.

Authors

Hugo Neuillé
Hugo Neuillé holds a Master’s degree in Global Security and Cybercrime from NYU, where he specialized in the intersection of emerging technologies and international security. Recently, he led a consulting project for the US Department of State’s Global Engagement Center, delivering policy recommenda...
