Buried in Congress’s Budget Bill is a Push to Halt AI Oversight
J.B. Branch, Ilana Beller / May 16, 2025
While much of this week’s congressional media attention has focused on House Republicans’ proposed cuts to Medicaid, which would strip health care coverage from millions, another consequential provision buried deep in the budget language has received far less scrutiny, even though it would affect all Americans.
The US House Energy and Commerce Committee, voting along party lines, advanced a measure that would preempt all state and local regulation of AI for the next ten years. The provision, buried in Section 43201(c) of the committee’s budget reconciliation proposal, would functionally strip the public of any meaningful recourse in the face of AI-related harm.
This would not simply be a pause in regulation. It would be a decade-long permission slip for corporate impunity. The provision sends a message to the states: Sit down and shut up while Big Tech writes its own rules, or continues to lobby Congress to have none at all. It is, in short, recklessly irresponsible.
For years, federal lawmakers have dragged their feet on AI oversight while state leaders, Republican and Democrat alike, have stepped up. In the absence of federal action, states have built safeguards where AI is rapidly reshaping our lives, including protections against deepfake election material, deepfake pornography, AI-generated child sexual abuse material, and algorithmic discrimination, as well as rules governing liability for autonomous vehicle systems. States passed these laws when Congress failed to respond to people injured by autonomous vehicles, to AI-generated pornography created by teens circulating through schools, and to child suicides following interactions with AI chatbots.
Here is just a small sample of the state laws that could be blocked by the House Energy and Commerce Committee’s ten-year moratorium on state AI regulation:
- Two-thirds of US states have laws against AI-generated deepfake pornography.
- Half of US states have laws against AI-generated deceptive election materials.
- Colorado’s comprehensive state AI Act establishes baseline consumer protections and accountability mechanisms for AI companies.
- Kentucky’s AI laws protect citizens from AI discrimination by the state and require citizens to be informed when decisions have been rendered by AI.
- Tennessee’s ELVIS Act guards against AI voice cloning, protecting artists from exploitation and the unauthorized use of their voice and likeness.
- A North Dakota law prohibits health insurance companies from using AI to make authorization decisions about treatments, mandating that any denials be made by a licensed physician.
- New York’s AI Bill of Rights provides civil and consumer rights protections to residents when dealing with AI systems.
- South Carolina’s state supreme court issued a directive “to ensure the responsible and secure integration” of AI, which includes restrictions on AI use involving confidential court records.
- Nineteen states have laws governing autonomous vehicles, all of which would likely be voided under the preemption provision.
- Utah’s groundbreaking legislation protects consumers interacting with mental health AI chatbots.
- California’s nation-leading AI laws include AI content disclosures and guidelines for how consumer data may be used to train large language models.
These laws and policies reflect the urgent work of state lawmakers and other policymakers to address risks and harms given Congress’s inaction. Many of these lawmakers have been spurred to action by constituents who came to them after experiencing harm resulting from AI products and services.
In an era of hyperpartisan politics at all levels of government, state lawmakers have worked collaboratively to enact policy solutions that seek to address some of the most significant harms caused by AI. The first few years of state AI regulation included the creation of numerous AI studies and task forces to assess the best approaches to protect constituents while promoting innovation. Many of the common-sense guardrails put in place have received overwhelming support from the public and legislators. For example, in many states, deepfake regulations have passed unanimously.
Champions of AI immunity claim they are protecting innovation from a “patchwork” of state laws. But US AI companies appear to be thriving under these very laws, with one company recently valued at $14 billion. The US leads the world in AI, and businesses across all industry sectors manage variations in state law in countless domains, including product safety, data breach notifications, and labor rights, without claiming national competitiveness is at risk.
The argument that AI regulation will cause the US to “fall behind China” has become the rhetorical crutch of the anti-regulatory crowd. They continue to beat the drum of an “AI arms race” that must be won against China by any means necessary, even at the risk of harm to their constituents. To be clear, there is no evidence that consumer protection and global leadership are mutually exclusive. The public shouldn’t have to sacrifice civil rights, safety, or democratic oversight in the name of an imagined arms race.
During the markup, Chairman Brett Guthrie (R-KY) suggested that Congress will “get to” a national AI standard later. Yet Congress has spent several years examining AI policy with little concrete action to show for it: the Senate convened numerous “AI Insight Forums,” and the House commissioned a bipartisan task force that offered recommendations on federal AI regulation last year. With the exception of the recent passage of the TAKE IT DOWN Act, Congress has failed to pass any major tech legislation in recent years; even the Kids Online Safety Act, despite overwhelming support in the Senate, never got a vote in the House. One must ask: why block state protections now, when no federal alternative exists, while offering only vague promises that something might follow more than a decade down the road?
This provision does not create a national AI standard. It creates a vacuum, a ten-year window during which AI companies could operate without accountability in nearly every domain that matters. No lawsuits. No local investigations. No transparency mandates. No new rights. No democratic debate.
We’ve seen this playbook before. Congress deferred action on social media. Then, the states tried to fill the gap. But it was too late to prevent many of the consequences we live with now: rampant disinformation, teen mental health crises, privacy abuses, and election interference.
The same mistake must not be repeated with AI, which could affect people’s lives even more profoundly than social media has.
If Section 43201(c) becomes law, it will not just delay regulation; it will signal to companies that the race to scale matters more than the public’s right to safety.
The future of AI should not be dictated solely by trade associations and Big Tech lobbyists. It should reflect public values, including accountability, fairness, and the right of people to seek redress when they are harmed. That is exactly what states delivered when they took the lead absent federal leadership. Indeed, this state leadership on AI responsibility is federalism at work, a principle Republicans often champion as an imperative to limit the federal government’s power and empower states to make decisions for their constituents.
Congress still has a choice: It can support a vibrant federal-state partnership on AI governance, or it can silence the only lawmakers who have acted to protect the public. The stakes could not be higher.