The Preemption Fight Goes Far Beyond AI. States Must Persist.
Alan Butler / Dec 15, 2025
President Donald Trump takes questions from members of the media aboard Air Force One on December 9. (Official White House Photo by Molly Riley)
We have heard the message loud and clear these last few weeks: the Trump Administration and its allies in Congress want to block states from regulating artificial intelligence, claiming we must have “ONE RULE” nationwide.
The question is: can they actually do that?
Certainly not through an executive order alone, even though they have been eager to portray their latest attempt as a serious effort. But even on the legislative side, there is little evidence of consensus at the federal level on what rules governing AI systems should look like, and the proposals currently being considered include many different, and at times overlapping and intersecting, sets of rules, not a single standard. These are complex issues that require a multi-pronged policy approach, as we have seen this year in California, where the legislature passed more than a dozen new laws in this area.
The reality is that the states have been (and remain) at the forefront of tech policy over the past decade, addressing privacy, online safety and AI oversight. The Trump administration and a few prominent Republican members of Congress have tried to block state lawmaking on these important issues, including AI. They failed in June, and they are failing again now in December. We will keep fighting this fight, and I hope they will keep losing.
What we are seeing this week in Washington is an escalation of a fight that has been brewing for years. Big Tech firms have been complaining about the “patchwork” of state laws ever since Californians adopted their landmark privacy law through a ballot initiative in 2020. And preemption has been a major focus of the debate around federal privacy legislation in recent years. But those negotiations were bipartisan, and the proposals were designed to establish a federal standard on privacy while preserving state authority outside that specific scope.
Over the past five years, states have been increasingly active on tech policy issues in the absence of federal leadership. This is the benefit of a federalist system — states serve as laboratories of democracy and advance policies despite federal inaction.
Over the past two years, as the rollout of general-purpose AI models and chatbots has accelerated, states have begun to focus on risks and the need for transparency, oversight and accountability to address the harms these systems can cause. This includes laws passed in California this year and last year, as well as in Colorado, Utah and several other states. States are identifying emerging problems and future risks and are working to address them through new policies. This is a sign of a healthy system at work.
The proposals that we have seen this year are different. Both the White House and House leaders have pushed for sweeping preemption that would block states entirely from enacting or enforcing tech regulations. This unprecedented inversion of our federalist structure, they claim, is necessary to “remove barriers” to AI “leadership.” Yet there is no evidence that AI companies are unable to comply with existing laws or that they aren’t equipped to have their voices heard in state policy debates (thanks to the hundreds of millions of dollars they have poured into PACs this year).
Sen. Ted Cruz’s (R-TX) proposal in this summer’s budget bill to bar states from enforcing AI laws for 10 years was the first sign of how the strategy around preemption had shifted. The provision, which would have stripped funding from states enforcing AI regulations, was removed from the bill on a 99-1 vote via an amendment by Republican Sen. Marsha Blackburn of Tennessee. The AI companies lost that battle, but clearly they were gearing up for a bigger war against the states.
Cruz returned to the issue again in September, arguing that his moratorium idea was “not at all dead.” The big news last month was the endorsement of that approach by both House Majority Leader Steve Scalise (R-LA) and President Donald Trump himself. Scalise said he intended to include a version of the moratorium in the annual defense authorization bill (NDAA), which Congress intends to pass by the end of the year. However, it quickly became clear that the effort lacked support from both the bipartisan leaders drafting the NDAA and the Senate more broadly; prominent GOP governors also opposed the idea.
Without a clear path to pass a ban on state AI policymaking through Congress, we started to see drafts of a new executive order titled “Eliminating State Law Obstruction of National AI Policy.” Let’s be clear: The president does not have the authority to “eliminate” state laws or to ban state AI regulation.
So what should we think of this executive order?
It is notably an implicit acknowledgment by the White House that the moratorium legislation remains doomed. In other words, this is the administration’s attempt to do something through executive action because Congress cannot pass its own AI moratorium proposal. But what actions can the president actually take or direct federal agencies to take that could prevent state policymaking? Not many.
The executive order signed Thursday directs the Commerce Department to make a “list” of state AI laws that “conflict” with the administration’s policy objectives, and then issue a “Policy Notice specifying the conditions under which States may be eligible for remaining [federal broadband] funding.” States on this Commerce list, under the policy, would be “ineligible for non-deployment funds, to the maximum extent allowed by Federal law.”
Does federal law allow the Commerce Department to unilaterally impose such limits on broadband funding despite Congress’s appropriation? The executive order doesn’t say, but we can assume it does not, since the main aim of Cruz’s bill was to impose exactly that restriction through legislation.
Next, the order calls on the Federal Communications Commission to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” However, the FCC does not have the legal authority to impose such a standard on AI companies; the commission’s regulatory purview is confined to specific categories of communications service providers.
The order also directs the Federal Trade Commission to issue a “policy statement” about preemption related to the agency’s unfair and deceptive trade practices authority, focused on “state laws that require alterations to the truthful outputs of AI models.” No such state laws exist, and even if they did, they would not be preempted under the FTC’s statutory authority.
Finally, the order would direct the president’s advisors to develop a “legislative recommendation establishing a uniform Federal policy framework for AI” to act as a vehicle for preempting conflicting state AI laws. But that is much easier said than done.
As state policymakers and many stakeholders across industry and civil society can attest, establishing uniform standards for AI — or any technology — is neither simple nor straightforward. Indeed, the executive order implicitly acknowledges the complexity of this project by including an incomplete list of preemption carve-outs ending with an ominous “(iv) other topics as shall be determined.”
Trump and Cruz’s efforts to pause state AI laws before even outlining a proposal to replace them invert the most basic principles of regulating complex systems: start small, iterate, and keep learning what works and what doesn’t.
Despite the lack of traction for AI standards in Congress, efforts to preempt state laws will remain a focus in the House as the Energy and Commerce Committee moves to advance several preemptive bills.
The Committee held a markup this week to consider 18 different “online safety” bills. Notably, the markup included modified versions of several bills that have seen bipartisan support in the Senate, including the Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0). These modified bills are significantly weaker than the Senate versions, and they include sweeping preemption provisions that would wreak havoc on existing state privacy and online safety laws. Even if House GOP leadership manages to pass these bills, it is not clear whether they would advance in the Senate.
But what is the potential impact of the executive order and the House bills on existing and future state policy work next year? The order does not (and cannot) preempt any state laws directly. But the order does call on the attorney general to create an “AI Litigation Task Force … whose sole responsibility shall be to challenge State AI laws.” What would that mean in practice?
The executive order’s task force provision mentions three legal theories: the Commerce Clause, existing statutory preemption and state laws that are “otherwise unlawful.” However, these are typically legal defenses raised by a defendant being sued for violating a state law (or possibly grounds for an affirmative claim to enjoin future enforcement against that defendant).
The United States (via the Department of Justice) does not have standing or a cause of action to challenge state laws currently on the books. The most likely path, then, is for the DOJ to join or intervene in future cases raising these claims, as it did earlier this year in a case challenging California emissions standards. In that role, the DOJ might support existing arguments made by companies challenging specific state laws or enforcement actions, but it lacks any special legal authority or power to influence a particular court’s analysis of those issues. The fact that both the Trump Administration and its GOP allies in Congress are pushing for new legal restrictions on state AI laws shows that they are aware that current law does not meaningfully limit those laws.
The language in the recently released House versions of several online safety bills, by contrast, would pose a major obstacle to the passage and enforcement of a wide range of state privacy, consumer protection, competition and other laws (including many specifically focused on AI). The preemption provisions in the current House proposals would prohibit states from enacting or enforcing any law, regulation or other measure that “relates to the provisions” of the bills (including the modified KOSA and COPPA 2.0), invalidating and blocking a sweeping array of state authorities.
This type of preemption would have a truly radical and unpredictable impact on states’ ability to regulate technology companies and services because nearly every regulation of tech firms “relates to” their data and design practices. All state laws enacted to defend privacy or to protect kids online would be nullified and replaced by whatever weak federal standard Congress includes in the ultimate bill. That means courts could invalidate any state privacy provision if Congress passes the modified COPPA 2.0, or any state online safety provision if Congress passes the modified KOSA. Since courts rely on congressional intent in interpreting the scope of a preemption clause, the fact that lawmakers have made clear their goal of creating one federal rulebook will likely lead courts to invalidate a wide range of state laws.
Everyone dedicated to protecting people from abusive and harmful practices online should speak out against the efforts by the White House and members of Congress to undermine important state protections and states’ rights. Americans deserve well-considered policymaking from their leaders. This is the opposite.