Perspective

The Storm Clouds Looming Past the State Moratorium: Weak Regulation is as Bad as None

Kate Brennan, Amba Kak, Sarah Myers West / Jun 10, 2025

The rising tide of public opposition to the blanket ten-year moratorium on state AI regulation has been energizing: regulating the AI industry isn’t a radical idea anymore, if it ever was. It’s now common sense for the vast majority of Americans — and an increasingly bipartisan group of lawmakers and attorneys general — that the most powerful companies of our time can’t be treated with kid gloves.

This was the antidote we desperately needed to counteract the extreme deregulatory rhetoric gaining momentum among leading figures within the Trump administration. Since being elected, President Donald Trump has positioned regulation as a clear-cut way for the US to “lose” the global arms race, and his allies have propagated fears of Chinese control of global AI infrastructure as a threat to American security and democracy. At a series of high-profile events, including Davos, the Paris AI Action Summit, and the Munich Security Conference, the Trump administration’s message rang loud and clear: Global regulation is a targeted attack on US companies and the antithesis of innovation.

Meanwhile, the administration has called into question the independent status of enforcement agencies and fired key employees tasked with enforcing existing laws to rein in corporate dominance, including two Democratic Federal Trade Commissioners with a strong record on tech enforcement (the two Commissioners sued the administration, claiming their removal was unlawful). Spurring the administration on is the heightened rhetoric from regulatory opponents in the industry: the venture capitalist Marc Andreessen, a tech policy advisor to President Trump, has suggested that any deceleration of the AI industry is akin to murder. OpenAI has added to the deregulatory chorus by proposing that any federal requirements on AI firms should be entirely voluntary. And just yesterday, the US Chamber of Commerce published a letter on behalf of hundreds of businesses in support of the state moratorium.

Against this backdrop, Anthropic CEO Dario Amodei’s recent New York Times op-ed calling for the industry to be regulated appears to be a reasonable middle ground. It is certainly useful to have a prominent AI industry voice contribute to the growing opposition to the moratorium. But Amodei’s op-ed also serves as a reminder of another grave threat on the horizon: an industry-scripted federal standard that would effectively eclipse state legislation. This strategy, recasting a weak proposal as sensible moderation while ensuring that industry leaders like Amodei set the terms of the debate, is a harbinger of an important fight ahead.

We must remain vigilant against a scenario that is as harmful as no regulation at all: weak regulation that legitimizes the AI industry’s behavior and lets business continue as usual. A federal law that imposes baseline transparency disclosures and then restricts states’ ability to impose additional, or stricter, requirements could place us on a dangerous trajectory of inaction.

Industry attempting to set the terms of the debate is standard fare in the tech lobbyist playbook. In 2019, with a growing chorus calling for bans on facial recognition technologies in sensitive domains like policing, several tech companies pivoted from resisting regulation to claiming to support it, something they often highlighted in their marketing. The fine print showed that what these companies actually supported were soft moves positioned to undercut bolder reform. Eventually, Washington state’s widely critiqued facial recognition law passed with Microsoft’s support. The law prescribed optional audits and stakeholder engagement — a significantly weaker stance than banning police use, which is what many advocates were calling for.

More recently, AI companies spent 2023 insisting they were firmly “pro-regulation.” But as the center of power has shifted towards a deregulatory current, any superficial consensus on guardrails has quickly fallen away. For instance, OpenAI’s CEO, Sam Altman, went from testifying in a Congressional hearing that regulation is “essential” to lobbying against a minor safety provision in California SB 1047 in just fifteen months.

To be clear, there are some things we agree with Amodei on. We need to impose transparency requirements on an industry that gains its power from information asymmetries. These requirements must run across the entire AI supply chain and be tailored so that they’re providing meaningful information to the public writ large—especially in sensitive domains like our schools, our jobs, and our hospitals, where people are increasingly subject to AI technologies.

But make no mistake: disclosures are necessary, and urgent, but they are far from enough. We need to weed out the worst practices—the kinds of AI that should never be built at all—and put in place rules that strike at the root of the extractive and invasive dynamics developing in the AI market: from turning a profit on our most sensitive data, to Big Tech leveraging its dominance in the tech ecosystem to self-preference its AI and shut the door behind it. The drumbeat of legislation in the states is working to stop these abusive practices. And a transparency-focused mandate, of the kind proposed by Amodei, risks undermining these efforts to impose the bare minimum on tech companies.

Given the monumental stakes, blind trust in the benevolence of AI firms is not an option. Now, more than ever, we cannot let AI firms write the rules of their own game. We need an independent, publicly-led roadmap set by individuals and organizations on the ground, not AI firms acting like the kings of their own kingdom. In our 2025 landscape report, we spotlight a number of policy interventions that can effectively hold this industry to account. These include:

  • Bright line rules against the worst AI abuses
  • Ex ante validation and testing to ensure that AI systems work as intended and don’t cause ancillary harms throughout the full life cycle of AI deployment
  • Data minimization requirements that put constraints on firms’ ability to collect and repurpose data about us and limit secondary use to train AI models
  • Strong antitrust rules and enforcement to tackle anti-competitive behavior and address the concentration of power within the AI market

A fight still lies ahead to ensure that the moratorium doesn’t pass and strip states of the ability to protect their own citizens. Seeing that fight through in the face of the extreme deregulatory push may produce unconventional alliances. But we can’t compromise on the long game: the public, not industry, must set the terms for where and how the AI industry is held to account.

Authors

Kate Brennan
Kate Brennan is the Associate Director at the AI Now Institute, where she spearheads policy and research programs to shape the AI industry in the public interest. Prior to joining AI Now, Kate held multiple roles across the tech industry, including product marketing at Google and digital marketing f...
Amba Kak
Amba Kak is a leading technology policy strategist and researcher with over a decade of experience working in multiple regions and in roles across government, academia, and the nonprofit sector. Amba serves as Co-Executive Director of AI Now. Previously, Amba was Senior Advisor on AI at the Federal ...
Sarah Myers West
Dr. Sarah Myers West is Co-Executive Director of the AI Now Institute. Sarah recently served as a Senior Advisor on AI at the Federal Trade Commission, and is a Visiting Research Scientist at the Network Science Institute at Northeastern University and a Research Contributor at Cornell University's Citi...
