Critical Questions for the House Hearing Examining a Federal Restriction on State AI Regulation
Liana Keesing, Isabel Sunderland / May 20, 2025
Last week, while headlines tracked President Trump’s trip to the Middle East, Big Tech quietly executed a legislative coup. Buried deep in the House Energy & Commerce (E&C) Committee’s additions to the sprawling budget reconciliation package was a sweeping provision imposing a ten-year federal moratorium on all state and local regulation of artificial intelligence. As written, it would effectively wipe out hundreds of state-level laws already enacted to address issues like child-targeted companion chatbots, scams against the elderly, AI-generated pornography, election deepfakes, and autonomous vehicles.
Because the language was inserted through the reconciliation process, it moved through the committee with minimal opportunity for bipartisan debate. It was a strikingly effective maneuver; after years of performative calls for “guardrails,” tech giants like Meta and Google lobbied relentlessly on Capitol Hill and secured exactly what they have long sought: regulatory immunity, without a single public vote.
The provision faces an uphill battle in the Senate: it runs afoul of the Byrd Rule, which blocks unrelated policy measures from reconciliation bills. But its mere appearance should sound alarms for all tech accountability advocates. This wasn’t a fluke; it was a trial balloon. Preemption that is sweeping, substance-free, and unaccompanied by federal standards is fast becoming the central federal battle in the tech policy space. Just last week, Senator Ted Cruz (R-TX) previewed a forthcoming “light-touch” AI bill centered on federal preemption, echoing industry arguments that a patchwork of state laws creates confusion. Meanwhile, the House is drafting a comprehensive privacy bill that many fear will override stronger state protections in favor of weaker federal ones.
That’s why tomorrow’s hearing on “Seizing America’s AI Opportunity,” hosted by the House E&C Commerce, Manufacturing, and Trade (CMT) Subcommittee, is a rare and urgent opportunity to demand clarity. While we agree that strong federal legislation is the ideal path forward — one that protects consumers without placing undue burdens on small businesses — Congress has spent the past three years gridlocked on AI policy, managing to pass only a single significant bill: the Take It Down Act. In the absence of federal action, states across the political spectrum have stepped up to address emerging harms.
Every member of the CMT Subcommittee should treat this hearing as an opportunity to press for clarity and guard against a blanket preemption that shuts down public debate. This is not a partisan issue. Several Republican members hail from states that have enacted thoughtful, bipartisan AI laws, which the proposed moratorium would sweep away.
Chairman Gus Bilirakis (R-FL), a vocal advocate for children’s online safety, should consider how the moratorium would override state laws regulating child-directed algorithms and chatbots. In Kentucky, E&C Chairman Brett Guthrie’s (R-KY) home state, lawmakers recently passed a bill with overwhelming bipartisan support requiring disclosure when AI is used in public decision-making. Tennessee, home to Rep. Diana Harshbarger (R-TN), passed the ELVIS Act to protect artists from AI-driven voice cloning — an issue of particular concern in a state whose identity is deeply tied to country music, bluegrass, and the honky-tonks of Nashville. And of the 13 states represented by Republicans on this subcommittee, nine have already enacted laws to combat election-related deepfakes. The moratorium would dismantle precisely the kinds of narrowly tailored, state-level laws that lawmakers themselves often cite as models for responsible innovation.
As lawmakers prepare for tomorrow’s hearing, here are the critical questions they should be asking about the proposed AI moratorium.
The Top 5 Questions that Legislators Should Ask
- Does a blanket preemption assume that a rural community in the Midwest and a tech hub in California should be governed identically with regard to AI? To what extent should states have the flexibility to address the unique ways AI impacts their local contexts? How do we avoid a mismatch between a one-size-fits-all federal approach and the diverse on-the-ground realities across America?
- Tech companies have a history of moving fast and breaking things, sometimes at the expense of consumers. If states are effectively sidelined for 10 years, do you trust that AI companies will adequately self-police their products and services? Or do we risk a spike in consumer harms (unfair algorithmic decisions, privacy invasions, AI-driven fraud, etc.) that more nimble state interventions could have mitigated?
- The Constitution gives states broad authority to protect public health and safety. On what constitutional grounds can Congress preempt that authority without offering a federal alternative? How does this moratorium square with the Tenth Amendment, which reserves powers not delegated to the federal government to the states, particularly in areas like consumer protection and civil liability?
- Proponents of the moratorium have compared it to the Internet Tax Freedom Act, the “internet tax moratorium” of the late 1990s that prevented states from taxing internet access. They argue that just as a light-touch approach helped the early internet flourish, a pause on state AI rules will help AI innovation. However, that moratorium was narrowly tailored: it applied only to taxes, not to substantive regulation. Can any of the witnesses identify a precedent where Congress preemptively barred states from governing any aspect of a rapidly developing technology without establishing any federal regulatory framework, effectively leaving a legal vacuum? In particular, has Congress ever done so in a domain that implicates not just consumer protection and safety, but also civil rights, labor, education, and economic autonomy at the state level?
- The current preemption language is written so broadly that it could block states from overseeing how AI is used within their own agencies. What is the pro-innovation rationale for preventing states from overseeing AI usage within their governments? If a state wants to ensure its unemployment office, DMV, or public hospital uses AI responsibly and transparently, why should federal law forbid that for 10 years?
Product Safety and Algorithmic Accountability
- 14-year-old Sewell Setzer III died by suicide after reportedly being emotionally manipulated by an AI companion chatbot built by Character.AI, a company founded less than five years ago. His case is just one of several emerging lawsuits uncovering the severe harms these AI systems can cause, including hypersexualization, encouragement of suicidal ideation, grooming, and mental health deterioration. In light of these rapidly unfolding dangers, how can Congress justify a 10-year moratorium that would block states from responding to new, AI-driven threats to child safety as they emerge?
- Meta’s AI chatbots have reportedly engaged in sexually explicit conversations with children, even after users identified themselves as underage. Internal decisions, reportedly driven by Mark Zuckerberg, weakened safeguards to boost engagement, including carving out exemptions to bans on explicit content. Tech companies like Meta have repeatedly prioritized profit over safety, rolling back protections, lying to the public, and allowing new products to exploit children for engagement. If a 10-year moratorium blocks states from acting, what concrete solution do supporters propose to protect consumers from an industry with a demonstrated pattern of deception and harm?
- How are AI-driven recommender algorithms, deliberately optimized for engagement, fueling screen addiction and worsening the youth mental health crisis? Given that this committee has yet to pass legislation addressing this challenge, how would a 10-year moratorium on state laws do anything other than shield the very companies profiting from that harm?
- Autonomous vehicles and AI decision systems are already operating in states like Arizona and California. If this moratorium preempts local oversight, who is responsible when these systems fail and cause real-world harm?
- Industry advocates often assert that state-level algorithmic accountability laws, including transparency mandates and bias audits, are stifling innovation and creating uncertainty for developers. But many of these measures are narrowly tailored and supported by bipartisan coalitions at the state level.
- Can you point to concrete, verifiable examples where such laws have directly caused a startup to fail, halted product deployment, or materially slowed innovation?
- Absent those specifics, how should Congress evaluate the repeated claims that modest, targeted state regulations, many of which mirror long-standing consumer protection practices, are an existential threat to the tech sector?
Impact on Small Businesses and Local Economies
- A number of cities, including San Francisco, Philadelphia, and Minneapolis, have banned AI-driven rent-setting software used by large landlords after evidence emerged that these algorithms enabled landlords to collude, pushing rents up and reducing housing availability. Those local ordinances were meant to protect renters (many of them small businesses or local workers) from inflated rents and potential price-fixing by sophisticated AI tools. If the federal moratorium nullifies such city-level bans, what happens to those communities’ efforts to keep housing affordable? What economic impact could this have on local residents and mom-and-pop landlords in our districts if an algorithm is given free rein to hike rents and they have no local recourse?
- AI-driven automation is projected to displace certain jobs and disrupt local labor markets. Typically, states might respond by updating labor laws, such as requiring notice or severance when AI replaces a large number of workers, or setting up workforce retraining programs funded by fees on companies deploying job-eliminating AI. If measures like those are deemed to “regulate AI” and thus frozen, how can states mitigate sudden economic shocks in their communities?
Federalism and States as Laboratories of Democracy
- Our federal system empowers states to act as experimental labs for policy. We see that with AI right now: last year, lawmakers in 45 states introduced hundreds of AI-related bills. If Congress imposes a 10-year freeze on all these efforts, it would effectively shut down those opportunities to test different models of innovative legislation.
- How can Congress learn what works and what doesn’t in AI governance, if it forbids states from experimenting or tailoring solutions to their unique populations?
- To what extent does a one-size-fits-all federal timeout risk stagnating policy development, given that technology — and the harms from it — will continue to evolve?
Transparency, Disclosure, and Oversight
- Some states, like Kentucky, have passed laws to ensure that whenever AI plays a role in significant public decisions, such as denying someone a job, a loan, health care, or insurance, the people affected are informed and the technology is evaluated for transparency. If the moratorium stops states from enacting or enforcing such measures, how will citizens know when an algorithmic decision made by the government affects them, or whether that AI has been vetted for discrimination?
- In 2020, California voters approved a privacy law that gives consumers the right to opt out of automated decision-making and to know when businesses use personal data in AI algorithms — tangible rights that are already in effect. The state’s privacy regulator has warned Congress that the moratorium “could rob millions of Americans of rights they already enjoy” by preventing enforcement of these new AI transparency and opt-out provisions. How does Congress justify a federal policy that removes a layer of consumer protection without replacing it with any equivalent federal standard?
Election Integrity and Deepfakes
- Although Congress has yet to pass legislation on this issue, 25 states, from Alabama to Massachusetts to Utah, have enacted laws addressing the use of deceptive AI-generated content in elections. Polling shows that more than 75% of Americans believe it should be illegal to use deepfake technology to influence elections. Why is it critical to safeguard the electoral process from AI-generated deepfakes, and what responsibilities should technology companies bear in preventing the misuse of their platforms for deceptive electioneering?
- How does preempting these state laws improve our ability to combat false information about elections? What is the risk that bad actors, including foreign adversaries, will treat the moratorium as a green light, giving purveyors of deepfake propaganda a free pass until a federal regime is in place?