Perspective

The AI Deregulation Agenda Has Helped Create an AI Bubble and May Hasten a Crash

Amber Sinha / Sep 9, 2025

Amber Sinha is a contributing editor at Tech Policy Press.

US President Donald Trump delivers remarks at the White House AI Summit at Andrew W. Mellon Auditorium in Washington, D.C., Wednesday, July 23, 2025. (Official White House photo by Joyce N. Boghosian)

A little over a month ago, Euractiv journalist Thomas Moller-Nielsen warned that the global trade war instigated by United States President Donald Trump is drawing attention away from a much graver economic threat: the Trump administration’s deregulation agenda. His analysis drew on a recent speech by US Federal Reserve Governor Michael Barr, who pointed out that significant deregulation preceded “all three of the most infamous financial meltdowns over the past century, namely the Great Depression in the 1930s, the ‘Savings & Loan’ crisis in the late 1980s and early 1990s, and the 2009 global financial crisis.” The same argument may apply to digital deregulation, which threatens to exacerbate risks to the market from unregulated artificial intelligence products and services.

The US technology deregulation agenda

The Trump administration’s tech deregulation agenda was expressed forcefully by Vice President JD Vance in his speech at the AI Summit in Paris in February, which he used to admonish the European Union. He decried the EU’s regulation of the technology sector, slamming the compliance burden it imposes on American companies and calling the EU’s Digital Services Act an example of overregulation and a violation of free speech rights. Vance welcomed the shift in emphasis from AI safety and governance to AI innovation at the Paris Summit, a trend that will likely continue at the next iteration of the summit in India in 2026.

Even before the second Trump administration, the US had not advanced any significant AI regulation apart from President Joe Biden’s 2023 Executive Order, leaving the field largely in the hands of the states. In May, however, Republicans doubled down on AI deregulation when the US House of Representatives passed a budget reconciliation package containing a proposed 10-year moratorium on any form of AI regulation by state governments or local authorities. The Senate ultimately stripped the moratorium, and the package passed without it. Still, the episode lays bare the desire of Republicans in Washington to treat AI as a legislative subject over which the federal government should have exclusive jurisdiction, and to exercise that jurisdiction with the explicit purpose of allowing AI development in a regulatory vacuum.

On its own, this is a dubious policy position for any constitutional democracy, given the growing body of evidence of harms resulting from unregulated AI and the obvious transparency and accountability challenges it poses, especially when AI is used to discharge public functions. At best, the proposed restriction on state laws, which seems likely to return in another form, can be seen as a more extreme and ideologically entrenched version of the US government’s historically hands-off approach to regulating digital technologies, often based on a debatable understanding of the tradeoffs between establishing guardrails and ensuring innovation.

Assessing the AI bubble

A recent MIT study found that even though companies have collectively poured an estimated $30 to $40 billion into AI, most are still getting no measurable return on their investments. While there has been some skepticism about the study’s methodology and its narrow metrics of success, the result buttresses a point that several commentators have been emphasizing: that we are in an AI market bubble. Nvidia now accounts for a larger share of the US stock market than any company has in the past 35 years, underscoring the market’s outsized dependence on AI. Eight of the ten US companies worth over $1 trillion are in the AI business.

AI-driven spending also props up other sectors of the economy that supply AI infrastructure: data centers, semiconductor factories, and energy projects. The parallels with the dotcom bubble, whose crash in 2001 triggered a corresponding telecom crash, are worth noting. When major infrastructure companies take on heavy debt to fund expansion amid this kind of over-investment, their valuations can become inflated well beyond their actual worth.

As evidence that AI is a financial bubble becomes harder to deny, OpenAI founder and CEO Sam Altman blames it on speculative capital chasing AI companies with weak fundamentals, while simultaneously defending AI built by large companies like his own, where “fundamentals across the supply chain remain strong, and the long-term trajectory of the AI trend supports continued investment.”

Consider the financial viability of the large models that fuel the current boom in AI. The cost of making these models is staggering: they require billions of dollars to gather and label training data and to process it on massive computer networks. Even if acquired at a discount in the event of a valuation crash, these models would remain extremely expensive to run, with every query requiring the buyers, as Cory Doctorow puts it, “to power the servers and their chillers.” So far, investment has subsidized the costs to give the illusion of a working business model, and it remains unclear whether there is a future in which paying customers can support the costs of building and running these systems.

The promise that Big AI is selling is that of the elusive artificial general intelligence (AGI), with regular announcements of its impending dawn. The truth remains that despite all their purported sophistication, AI systems continue to be fooled by simple things: small, irrelevant edits to text documents, or changes in lighting. Even minor ‘noise’ can disrupt state-of-the-art image recognition systems, and if small modifications are made to the rules of games an AI has mastered, it can often fail to adapt. These limitations reveal that such systems do not understand the inputs they process or the outputs they produce, leaving them susceptible to unexpected errors and sometimes undetectable attacks.
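To make the ‘noise’ problem concrete: in the research literature, these failures are known as adversarial examples. Below is a minimal sketch of the fast gradient sign method, one standard way of generating them, written in PyTorch against a stand-in, untrained model rather than any real production system. Pointed at a trained state-of-the-art classifier and a real photo, the same few lines are often enough to flip the predicted label without visibly altering the image.

    # Minimal sketch of the fast gradient sign method (FGSM). The model here is
    # a stand-in: a single untrained linear layer over a 3x32x32 "image" with
    # 10 classes. A real attack would target a trained vision model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()

    image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in photo
    original_pred = model(image).argmax(dim=1)            # model's own prediction

    # Compute the loss of the model's current prediction, then nudge every
    # pixel a tiny step in the direction that increases that loss.
    loss = F.cross_entropy(model(image), original_pred)
    loss.backward()
    epsilon = 0.05  # a perturbation small enough to be invisible to a human
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

    perturbed_pred = model(adversarial).argmax(dim=1)
    print("prediction on original image: ", original_pred.item())
    print("prediction on perturbed image:", perturbed_pred.item())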

How do you fix these errors? The answer, so far, has been to double down on the current approach to AI: adding more layers of networks and more data to correct the flaws, or, more recently, exploring large reasoning models. But these flaws may be an unremovable symptom of systems that accomplish tasks without understanding what those tasks are, placing a natural ceiling on what such AI can achieve. The promise of AGI may remain just that, at least in the near future.

Exporting the deregulation agenda

All of this is exacerbated by the Trump administration’s deregulatory turn. Trump has not only removed regulatory barriers in the US, but also seeks to employ the trade tools at his disposal to pressure other countries into relaxing their digital rulebooks for American companies. There is a separate discussion to be had about sovereignty as an essentially contested concept, how trade relations have come to shift its boundaries within acceptable norms, and how a maximalist US government is now blatantly ignoring those norms.

I noted above that small, irrelevant changes can deceive automated systems. The impact of these machine errors can be minor, such as extra steps to select pictures in a grid before you can log into a website, or extremely severe, with high real-life costs: denial of benefits in a government program with an automated delivery system, or being locked out of a platform you rely on to run your business because an automated system detected a supposed violation of community guidelines.

How we regulate AI should be a function of what it does (its purpose, its outputs, and its impacts) rather than what it is or how it works. Creating regulatory exceptionalism premised on the necessity of AI innovation and growth runs against this logic. Regulations that ensure safety and governance when public and private systems use AI not only protect fundamental rights and the public interest, but also make good economic sense by tempering unsustainable market valuations.
