Trump AI Action Plan Raises Legal Questions, Potentially Violates Constitution
Cody Venzke / Jul 31, 2025
Flanked by White House Office of Science and Technology Policy director Michael Kratsios (left) and special advisor for AI and Crypto David Sacks (right), US President Donald Trump signs Executive Orders at the White House AI Summit at Andrew W. Mellon Auditorium in Washington, D.C., Wednesday, July 23, 2025. (Official White House photo by Joyce N. Boghosian)
In the final hours before Congress passed the massive budget reconciliation bill just before the July 4 holiday, senators voted 99-1 to remove a proposed moratorium on the enforcement of state laws on artificial intelligence. But just two weeks later, the White House decided to do exactly what the Senate had just rejected: attempt to discourage state regulation of AI.
In its “AI Action Plan” released on July 23, the Trump administration is again attempting to forestall state regulation of artificial intelligence, despite the resounding “no” vote in the Senate. And just like the moratorium that was stricken from the reconciliation package, this attempt threatens significant harm, raises serious legal questions, and potentially violates the Constitution.
What is the AI Action Plan? What does it say about state laws?
The AI Action Plan is the Trump administration’s latest attempt to “achieve global dominance in artificial intelligence.” The plan is built on three pillars, the first of which is to dismantle “onerous” federal and state regulations. The plan envisions attacking state regulations on two fronts: through federal funding and through the Federal Communications Commission (FCC).
First, the plan recommends that the Office of Management and Budget (OMB) and other federal agencies “consider a state’s AI regulatory climate” when making decisions regarding “AI-related” federal funding for states. The Action Plan directs agencies to “limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” The scope of “AI-related” funds could be extensive, sweeping in programs funding broadband, education, cybersecurity, and more.
Second, the Action Plan recommends that the FCC “evaluate whether state AI regulations interfere with the agency’s ability to carry out its obligations and authorities under the Communications Act of 1934.” The FCC is an important agency for regulating specific technologies, but it is not a full-service technology regulatory body; it is empowered with specific authority over landline telephones, radio signals, and cable television. Consequently, directing it to evaluate state laws regulating artificial intelligence may seem misplaced — and as described below, it is.
Displacing state law opens the door for harmful AI
States have stepped up to address the ways that AI increasingly impacts all areas of our lives, including deciding who can rent an apartment, be admitted to a school, be offered a job, or receive a loan. States have led the way in passing legislation that regulates AI in state government and in critical areas of life like housing, education, employment, and credit. States are also engaged in robust debate over how to approach concerns over chatbots, generative AI, and deepfakes.
As states have been busy addressing AI harms, progress at the federal level has been limited. Instead, some federal policymakers have pushed to displace states’ efforts — including in the reconciliation package’s moratorium and now in the AI Action Plan. Neither the failed moratorium nor the AI Action Plan would create guardrails to replace the state protections that they’d displace, opening the door for unregulated, harmful AI across our lives.
The plan’s broad sweep is ripe for political abuse
The plan’s directives are sweeping, and we should be cautious in assuming the Trump administration will apply them narrowly. For example, some commentators have suggested that there are few, if any, “AI-related” funds, or pointed out that state laws will be impacted only if they “hinder” the “effectiveness” of a federally funded program. Consequently, they argue, the plan’s preemptive authority will be quite limited.
Admittedly, that is what the plan says, but the key terms here — “AI-related,” “hinder,” and “effectiveness” — are malleable and subject to weaponization.
For instance, “AI-related” programs may include education about AI, the infrastructure it relies on, or addressing the cybersecurity threats posed by AI — thus reaching funding that supports technology in K-12 schools, deployment of broadband in rural areas, or bolstering local governments’ cybersecurity.
Likewise, “hinder[ing] the effectiveness” of federal programs could mean directly targeting those programs or simply impeding or slowing their implementation — including by imposing testing or design requirements on AI developers. Given the Action Plan’s emphasis that AI development must be “unencumbered,” there is a real possibility the Trump administration will opt for the lower bar. If so, the state laws that “hinder” AI could encompass civil rights reporting requirements, environmental or zoning laws that affect data center siting, or privacy laws that protect our personal information. The directive for the FCC to evaluate state laws that “interfere” with administration of the Communications Act is similarly undefined.
That broad scope is compounded by the administration’s history of using federal funding as a cudgel against states and other institutions — including clawing back funds from New York over disagreements on immigration policy, from universities over student protests, and from researchers for purportedly supporting “DEI.”
The potential scope of the Action Plan’s preemption is sweeping, opening the door for AI harms that states have been taking steps to regulate and for the administration to attack states as suits its political agenda.
Would displacing these laws be legal?
As always, the legality of the Action Plan will depend in large part on the details of its implementation. But if agencies read the Action Plan’s directives to their fullest extent, each of the two preemptive measures will raise significant legal questions.
First, the new conditions imposed on “AI-related” funding may exceed the federal government's powers under the Constitution’s Spending Clause. The Spending Clause permits the federal government to attach conditions to the funds provided to states and localities. However, that authority is not unlimited: there must be “clear notice” to the states of conditions at the time the funding is provided and the conditions must be related to the purpose of the funding, among other things.
The Action Plan potentially runs afoul of both the “clear notice” and purpose limitations. There is no clear notice if a state is “unaware of the conditions or is unable to ascertain what is expected of it” or if the federal government imposes “post acceptance or ‘retroactive’ conditions.” That is the case here — the Action Plan would retroactively condition existing federal funding on an assessment of the states’ regulation of AI. Similarly, the purpose of the condition — “unencumbered” AI development — would be unrelated to the many programs potentially swept in, including K-12 funding, broadband deployment, and cybersecurity funds.
Second, the FCC’s directive extends far beyond its traditional authority under the Communications Act. Over its 91 years, the Communications Act has been amended to extend the FCC’s authority over specific technologies: telecommunications services like landline telephones; wireless spectrum including cellular service, Wi-Fi, and broadcast radio and television; and cable television. Beyond those technologies, the FCC’s authority is quite limited, and courts have repeatedly pushed back on FCC efforts to regulate new technologies beyond its jurisdiction — including, most recently, regulation of broadband internet service.
In limited instances, some courts have upheld preemption of state laws for certain information technologies under a “federal policy of nonregulation.” In those instances, however, the underlying technologies were “inseparable” from those the FCC can regulate under the Communications Act, such as the ability to place telephone calls over the Internet. Other courts have subsequently emphasized that the FCC cannot preempt state law over technologies it has no authority to regulate — and called into question whether a “federal policy of nonregulation” really can preempt state laws at all.
What’s next?
The AI Action Plan is not a legal document, and it is not self-executing. Instead, it must be carried out by Executive Order and agency action — meaning that there will be opportunities for advocates to raise concerns and for agencies to course-correct. The FCC must engage with the public before undertaking rulemaking, and OMB and other federal agencies should be transparent when implementing conditions on federal funding under the Action Plan.
Critically, Congress should step up and conduct oversight. Several committees — including the House Committees on Energy & Commerce and on Oversight and the Senate Committees on Commerce and on Homeland Security & Governmental Affairs — have jurisdiction over these issues and should hold hearings. Those hearings should feature testimony from state officials working to combat harms from AI, including the Republican governors and attorneys general who opposed the moratorium in the reconciliation package. The committees can ensure that agencies are staying within their legal and constitutional boundaries and preserving state authority to address rapidly emerging AI harms.