House Moratorium on State AI Laws is Over-Broad, Unproductive, and Likely Unconstitutional
Ellen P. Goodman / May 27, 2025
Ellen P. Goodman is a professor at Rutgers Law School and co-director of the Rutgers Institute for Information Policy & Law.

The dome of the US Capitol building. Justin Hendrix/Tech Policy Press
Even if you like the idea of federal preemption of state AI regulation, the House budget proposal to freeze nearly all state AI regulation for a decade is a loser. According to the proposal, for the next ten years, “no State or political subdivision thereof may enforce … any law or regulation limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.” Besides almost certainly being unlawful, this freeze would hurt the very American AI ecosystem it supposedly seeks to advance.
The first thing to get straight is that the proposed moratorium is NOT federal preemption, though it has been called that. Federal preemption of state law can be express or implied, such as when federal law occupies the field or conflicts with state law. Since Congress has not substantively regulated in the AI space, notwithstanding hundreds of proposed bills, there is no field or conflict preemption. Express preemption is what many policymakers have sought with federal privacy legislation: pass a new federal law and expressly displace state laws. Of course, they never did.
The best argument for federal preemption is that having dozens of conflicting standards and requirements hurts regulated entities and those they serve. University of Texas legal scholar Kevin Frazier and R Street senior fellow Adam Thierer recommended express federal AI law preemption in a Lawfare piece earlier this month, arguing that conflicting state policies “risk disrupting the development of a transformative technology.”
The House budget proposal doesn’t actually attempt the “go slow,” “light touch” federal preemption that an AI-friendly Congress might pursue. Rather, it expands on a separate Adam Thierer proposal for a “learning period moratorium.” For a dysfunctional federal legislative branch, this offers a way to shackle the states without doing much in their stead. Thierer’s original proposal, which tacitly recognized that the states have an important role to play, was relatively modest. Efforts like California’s aborted attempt to regulate foundation models were of particular concern. A Thierer-style moratorium would “block the establishment of any new general-purpose AI regulatory bureaucracy” to allow the industry to develop and the federal government to learn. Again, though, a dysfunctional Congress is not a motivated learner.
The scope of the House budget proposal is much broader, seeking to kill not only model regulation but also almost all new state regulation of AI system deployments and algorithmic decision systems. The latter might include everything from criminal sentencing to insurance premiums to school choice algorithms – areas Thierer acknowledges are in the heartland of state responsibility. While Thierer’s original proposal suggested that states would still be able to impose transparency requirements to guide responsible AI deployment, the House moratorium expressly covers state “documentation” requirements. There goes transparency.
There is an effort among proponents of a state AI law ban to make it seem like this is just what happened with the internet. It must be said that federal forbearance from regulating internet-related technology, particularly after mobile diffusion, was probably not good from competition, innovation, and social welfare perspectives. In any case, the particular early internet-era precedent is quite different and shows just how extreme the House moratorium is.
That precedent is the Internet Tax Freedom Act (ITFA) of 1998, which imposed a three-year moratorium on new state and local taxes on internet access and on multiple or discriminatory taxes on electronic commerce. It is inapt in a few important ways.
- The first thing that must be asked of any federal legislation is whether Congress has the power to act. In the case of internet taxation, there was a strong claim that taxable internet transactions were inherently interstate and therefore within federal jurisdiction. When it comes to the moratorium passed by the House, there is no obvious federal power to act. AI deployments need not be “inherently interstate.” Suppose the state of Tennessee wants to regulate AI educational products made by a Tennessee company for Tennessee schools. It’s not easy to see the interstate component in that scenario. Right now, Texas is considering a bill to require state agencies to disclose their use of AI systems when Texans interact with them. That kind of law would seem to be banned under the proposed moratorium, even though there is no obvious connection to interstate commerce.
- The tax moratorium was targeted at a well-defined regulation – taxes. By contrast, the House moratorium has an expansive and vague scope. With some carve-outs, it covers ALL regulation of AI, defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” If this territory were not vast enough, the proposal goes on to include automated decision systems, meaning “any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues a simplified output, including a score, classification, or recommendation, to materially influence or replace human decision.” Data analytics that produce a score… Would that include a state’s or municipality’s data-informed rules for prioritizing building inspections? Looks like it.
- Finally, the ITFA moratorium was short (at least initially). Three years is a plausible “learning” period. By contrast, the House moratorium lasts for a decade – virtually forever in AI time. This is why state governments have called the proposal an infringement on state sovereignty.
Even if the ITFA precedent were an apt analogy, a narrow, short-term ITFA-style ban isn’t clearly lawful under Tenth Amendment doctrine today. The Supreme Court’s 2018 decision in Murphy v. NCAA took a hard line on when the federal government is “commandeering” state lawmaking by telling states that they may not regulate. Some scholars believe that this precedent would invalidate something like an ITFA tax ban. If that is true of a relatively narrow imposition on what was, in 1998, a rather small slice of commerce, how much more obnoxious to constitutional anti-commandeering principles is a law that seeks to ban a vast, ill-defined area of state regulation that potentially reaches into every area of traditional state police powers?
The House’s proposed moratorium is of dubious legality and is out of line with past federal bans on state regulation of emerging technology. It also might well depress the very American AI innovation and diffusion it purports to promote. That is because regulation can support innovation by boosting business, individual, and government confidence in the technology. Utah, for example, in a pro-development move, decided to impose some rules on regulated professions’ use of AI. Another pro-development purpose of regulation is to offer entities some relief from ex post liability in exchange for ex ante compliance. Federal drug regulation works a little like this. This is also the idea behind a draft law in California that would invigorate AI standards-setting and incentivize compliance in return for a liability safe harbor.
Regulation sped the diffusion of automobiles, pharmaceuticals, and many other technologies (for better and for worse). If the federal government were poised to step into the void, a short moratorium in well-defined areas of AI regulation might make sense. But we know it is not.