Perspective

Debunking Myths About AI Laws and the Proposed Moratorium on State AI Regulation

Kara Williams, Ben Winters / May 28, 2025

Kara Williams is a Law Fellow at the Electronic Privacy Information Center (EPIC). Ben Winters is the Director of AI and Privacy at the Consumer Federation of America.

Last Wednesday, the House Energy and Commerce Subcommittee on Commerce, Manufacturing, and Trade held a hearing on "AI Regulation and the Future of US Leadership." The hearing featured significant discussion of dangerous language in the House budget bill that would ban states from passing new AI laws, or enforcing those already on the books, for the next 10 years, stripping people of rights that their state legislators have enacted into law. The bill just passed the House and now advances to the Senate. If this measure were to become law, this power grab would leave Americans vulnerable to ongoing and future AI-driven harms and strip state lawmakers of their ability to protect their constituents.

The proposal to ban states from legislating without proposing any protections to replace those already enacted into law is irresponsible and unacceptable. Congress has failed to pass any meaningful laws protecting Americans from the harms of technology for decades. Our organizations, the Electronic Privacy Information Center and Consumer Federation of America, have asked Congress to pass comprehensive privacy legislation for over 25 years. Congress has failed to do so, and there is no reason to think that regulating artificial intelligence would be any different.

A majority of the witnesses at Wednesday’s hearing were aligned with the tech industry. There was substantial discussion of the idea that the US is in the midst of an arms race—with many Representatives focusing on whether the US will “beat China” in the AI “race.” This framing prioritizes speed of AI development over what America should be proud to strive for—safe and responsible innovation.

The technology industry does not want any rules placed on it, regardless of how clear or commonsense those rules are. Instead, the industry wants the government to fund its increasingly exploitative technologies and get out of the way, resisting any attempts at transparency and accountability. Given Big Tech's outsized influence in the Trump administration, it is unsurprising that the majority in Congress is proposing to block states from protecting their residents without offering any legislation to replace those protections.

This piece aims to debunk the common arguments that industry-aligned groups make against commonsense AI regulation, both at last week's hearing and in general.

Debunking common myths about AI regulation

1. Regulation is not the enemy of innovation.

Clear regulations can promote responsible innovation. Regulation provides certainty: clear rules of the road and a level playing field for all companies. The choice between innovation and regulation is a false one. Framing the two as mutually exclusive prevents companies from zooming out to consider whether their products serve consumer protection, privacy, and civil rights. The status quo favors pushing systems to market as quickly as possible. In the current landscape, companies that take the time, spend the money, and put in the effort to test and refine their AI systems, ensuring they are safe, accurate, equitable, and privacy-protective, are placed at a competitive disadvantage compared to companies that skip those steps to get their products out first, even if those products are inaccurate or discriminatory. Regulation can instead encourage innovation: if all companies are required to create products and services that center privacy and other civil rights, it incentivizes a race to the top rather than a race to the bottom.

2. There is no single thing called "AI regulation," and it's unfair to paint it with a broad brush.

Many different kinds of artificial intelligence exist, ranging from simple algorithms to generative AI to automated decision systems. Some AI systems, like the generative AI behind ChatGPT that started the current frenzy, are relatively new, but many others are years or decades old, including the algorithms used to place targeted advertisements and the automated decision systems used to run background checks on potential renters. Just as the technology is not all the same, neither are the laws that regulate it. Artificial intelligence is also used across sectors, from healthcare to entertainment to government, and many laws regulate AI use cases in only one of these sectors. As with most things, AI can be used responsibly or for nefarious purposes, such as the creation and publication of nonconsensual intimate image deepfakes, which many laws prohibit. Because "artificial intelligence" does not refer to one singular thing and is almost so general as to be meaningless, saying that "AI regulation" is or does any one thing is similarly nonsensical. This breadth and variety of AI laws also means that not every AI company will have to comply with every AI-related law.

3. Transparency requirements won’t damage US competitiveness or lead to devastating competitive disadvantages.

Transparency requirements are often limited to basic information about when AI is used on an individual, what types of data go into the system, and how AI is integrated into a decision-making process. These disclosures do not require companies to share any trade secrets—in fact, laws are almost always explicit about exempting trade secrets. Requiring transparency provides immense benefits to individuals while placing little burden on companies. Transparency proposals that go beyond this basic information typically require disclosure only to state regulators and often only apply to larger companies. Nothing in these transparency requirements threatens the technology or companies’ trade secrets. If simple disclosures of this nature would be disastrous to any company, it perhaps points to a deficiency in the product, not the regulation requiring the disclosures.

4. Big Tech does not “welcome regulation,” despite having teams of lawyers capable of handling it.

Regulation sets out clear rules and creates a level playing field for all innovators, especially the prototypical "two people in a garage" startups. While Big Tech talked a big game in 2023 about welcoming regulation, Google CEO Sundar Pichai, OpenAI CEO Sam Altman, and Meta CEO Mark Zuckerberg have all recently urged Congress not to engage in any meaningful regulation, citing similar "race against China" concerns. Additionally, most regulations have thresholds determining which entities must comply, whether based on revenue, the amount of data processed, or the number of employees.

5. Self-regulation, including voluntary commitments or guidelines, is wholly insufficient.

Companies, particularly the Big Tech companies, have made promises about their responsible practices and broken those promises over and over. We've seen this with Google's alleged lies about tracking location data and with Meta's broken assurance that it wouldn't put facial recognition in its smart glasses because it was too creepy. After Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI made voluntary commitments in 2023, no meaningful transparency or accountability has followed. Simply put, these companies cannot be trusted to self-regulate: there have already been reports of OpenAI and Google failing to live up to their promises to rigorously test their models before release.

State efforts to regulate AI are not overly burdensome

6. While 1,000 state bills on AI have been introduced this year, this number is evidence of proper state governance rather than excessive regulation.

A broad swath of state lawmakers introducing numerous bills with varying approaches to solving a problem is at the core of how our country is supposed to work. One of the United States' central tenets is federalism, which allows states to serve as laboratories of democracy. As states take different approaches to regulating different aspects of AI, the federal government can learn from what has and has not worked well in various laws. Stripping state lawmakers of their ability to do their jobs regulating AI will deprive federal lawmakers of valuable lessons learned from states' experiences, and any eventual federal legislation will suffer as a result. Further, many of these bills do not impose substantive requirements on companies; instead, they establish task forces, fund government adoption of AI, and more.

7. US states, including Colorado, are not passing stifling or sweeping regulations that would place intense burdens on all companies developing or using AI.

While Colorado did pass a law regulating AI in 2024, the law only applies to the development and use of automated decision systems in life-altering decisions, such as access to housing and health care. Despite scare tactics advanced by industry-aligned think tanks, the law does not regulate spell-check or shopping recommendation algorithms. The law’s narrow focus on the highest-risk uses of AI means that most AI companies will not be affected at all.

In addition to its limited scope, this law does not put undue burdens on companies. Instead, it requires companies to tell people when they are using automated systems to make decisions about their lives and explain those decisions, implement a risk management program, and conduct impact assessments to ensure the safety of the AI systems they are developing or using. The law also grants limited rights to individuals who are subject to AI systems, including the right to correct their personal data and to appeal an AI-driven decision in certain cases, which research shows are rights Americans feel strongly about having. These requirements give Coloradans some knowledge and control over how AI is used to affect their lives, which is essential to ensuring transparency and accountability around AI, while requiring very little of companies developing and using these systems. Implementing these limited measures will not stifle innovation, but it will increase public trust in AI and help prevent AI from harming people.

Further, Colorado is the only state so far that has passed legislation of this kind. While other states have proposed or are still considering similar bills, they are largely modeled after Colorado’s law, meaning companies will not have to set up separate compliance systems for every state. The Colorado Legislature also gave companies ample time to come into compliance with the law, making the effective date a year and a half after enactment.

A 10-year moratorium on state AI regulation is dangerous

8. A Congressional ban on all states and localities regulating AI for 10 years is overbroad, anti-democratic, and ties states’ hands for an irresponsibly long time.

The moratorium prohibits states from enforcing any "law or regulation limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce." This encompasses essentially all existing AI laws and regulations and is wildly overbroad, especially in the absence of any federal AI regulation.

Ten years is too long to wait to place commonsense guardrails on AI. Despite Rep. Jay Obernolte’s (R-CA) statement during the hearing that “No one wants this [moratorium] to be ten years … I want it to be months, not years,” the plain language of the text—supported by Rep. Obernolte and all other Republicans on the Committee—is a 10-year moratorium. If Rep. Obernolte does not want ten years without state action, he should not support a bill with a 10-year moratorium.

At a time when Americans are being harmed by the use of AI systems, it is an abdication of responsibility for Congress to propose eliminating states’ rights to protect their residents. America should strive to win the race of responsible innovation, not innovation at any cost. We need innovation we can all be proud of. We are all living with the consequences of Congress’s failure to regulate social media companies for decades, and we cannot repeat that mistake with AI—the stakes are too high.

