Why Trump’s AI EO Will be DOA in Court
Olivier Sylvain / Dec 12, 2025
US President Donald Trump displays a signed executive order in the Oval Office of the White House on Thursday in Washington, DC. The executive order seeks to curb states' ability to regulate artificial intelligence, something for which the tech industry has been lobbying. (Photo by Alex Wong/Getty Images)
The consensus view across the United States is that artificial intelligence companies should be more accountable for the ways in which their powerful models and services impact consumers.
Algorithmic systems enable unfair price discrimination in housing and on ride hailing apps. AI-generated deepfakes fuel the exploitation of young women and efforts to confuse voters. Large language models drive people to delusion, depression and self-harm. These threats have done a remarkable thing this year: lawmakers in red and blue states as varied as California, Colorado, Florida, Michigan, New York, Texas and Utah agree that it is time for policymakers to redress the unique consumer safety risks that AI-powered services pose.
All year, however, President Donald Trump has been threatening to block such laws unilaterally. Never mind the characteristic all-caps syntax and gratuitous race-baiting focus on “DEI ideology” and “Woke AI.” His language, more importantly, parrots the pro-innovation rhetoric of his Big Tech allies. Finally, this week, the White House published an executive order that purports to single-handedly stop the states in their tracks in the name of innovation and global competitiveness.
If only the president were as powerful as he imagines.
Trump and Big Tech oppose state AI regulation
The Big Tech CEOs are not dummies. They have seen big returns on their plainly transactional public support of the president. Just days after Silicon Valley’s most powerful CEOs sat behind him on the inauguration stage, for example, the White House announced the “Stargate Project,” a government collaboration with SoftBank, Oracle and OpenAI to spend $500 billion to build AI data centers and related infrastructure. It also rescinded Biden-era policies, including the Blueprint for an AI Bill of Rights. In July, the White House issued a blustery AI Action Plan that reflects its broad opposition to most regulation, which it referred to as “barriers to American leadership.” And just days ago, the president approved Nvidia’s bid to export its powerful AI computer chips to China after months of lobbying and flattery from the chipmaker’s CEO. If China approves, Nvidia stands to make billions.
These supposed wins have not satisfied the administration, Trump's Big Tech allies or dogmatic opponents of tech regulation. To some of them, state policymakers' rules undermine "the nation's efforts to stay at the cutting edge of AI innovation at a critical moment when competition with China for global AI supremacy is intensifying."
Consider their failed attempt earlier this year to push through a federal preemption provision in a massive spending bill. Its sponsors contorted the provision to comply with budget reconciliation rules. But, even then, their plan flopped spectacularly in the Senate because of fierce opposition from state policymakers and consumer protection groups, not to mention a bipartisan group of senators. The administration over the past month again tried to include the preemption language in a defense spending bill, another unlikely place for it. This effort also failed.
The AI executive order is doomed from the start
Even so, the president has not given up. His administration has reverted to the one ostensible act of leadership that it knows too well: the executive order. The latest White House proclamation enlists the Department of Justice and the Federal Trade Commission. (Other provisions require various federal agencies to block state funding or study the laws’ effects, but the DOJ and FTC portions likely amount to the most direct assaults on the states.)
The order commands the DOJ to challenge state AI laws as violations of the Constitution’s Interstate Commerce Clause, as well as other unidentified federal laws and regulations. The order’s FTC provisions, on the other hand, require the agency to issue a policy statement that warns states not to “require alterations of the truthful outputs of AI models” based on its authority to protect against deception.
This latest order will probably also fail.
First, on their own terms, executive orders are not binding on the public. (Just take a look at the disclaimers at the end of any random EO, including this latest one, which typically caveat their scope.) They generally call on federal officials to do something. But civil law enforcement actions by DOJ and the FTC must always abide by laws and processes established by Congress. In other words, even if the president signs an order, the relevant agencies must still comply with their governing statutes if they are to achieve what the president commands. Do not mind the pomp of the signing ceremony in the Oval Office.
Moreover, federal agencies like the DOJ and FTC cannot encroach on lawful state regulations without a clear delegation from Congress.
One of the more instructive precedents on the point involved DOJ's challenge two decades ago to Oregon's Death with Dignity Act, a ballot initiative that authorized physicians to prescribe a lethal dose of medication to a competent adult with an incurable disease. In response to its passage, DOJ published an interpretive rule that restricted the use of controlled substances for physician-assisted suicide; such uses, it decreed, are not a "legitimate medical purpose" under the Controlled Substances Act. (Like policy statements, interpretive rules are not binding on the public.) A doctor, a pharmacist and some patients sued to block the policy. The Supreme Court in Gonzales v. Oregon sided with them, explaining, among other things, that Congress had not clearly authorized DOJ to take the action.
This is why, without any statute that comes close to addressing state regulation of AI, let alone one that preempts it, a DOJ or FTC attack on states would likely be dead on arrival.
Now, there is something to the argument that the local benefits of state AI laws do not outweigh the burdens on interstate commerce. This argument prevailed three decades ago in a case involving New York's state anti-porn law, where a court found that the state's regulation unduly burdened other states' interests. Geofencing technologies, however, enable companies to tailor their services to specific states and, as a result, have substantially diminished the interstate commerce concern. Regardless, states may tailor their laws to protect their residents. And that is what most of the state AI laws appear to do.
If there is anything in the order that portends a credible strategy, it is in the White House’s command that the FTC issue a policy statement that warns states about laws that alter “truthful outputs of AI models.” Among the AI industry’s concerns are state laws like the one in New York that impose restrictions on AI-powered pricing algorithms. Regulators and consumer advocates have argued that such services launder anti-competitive price-fixing in housing and retail markets. RealPage and others, meanwhile, have sued. They argue that they have the right under the First Amendment to use AI to make recommendations to landlords or whomever else.
The problem here is not so much the FTC's legal authority to promulgate a policy statement. Agencies across the federal government issue them all the time. After all, again, they do not go through a process that renders them legally binding. The problem for the FTC is that this recent order charges it to issue something that does not fit well under the agency's statutory deception authority.
No doubt, AI hallucinations are no good. But AI outputs are not fraudulent in the way that pyramid schemes or scams, the heartland of FTC deception authority, are. Generally, AI outputs are not falsifiable given that they are a function of the labels, proxies and variables that companies prioritize. This is why the agency has successfully gone after hyped-up and fraudulent claims about AI uses or effectiveness.
This further underscores the urgency for Congress to enact federal laws that attend to the specific ways in which AI-powered services endanger consumers and businesses.
States step up when Congress does not
The White House's ambition to unilaterally subvert red and blue state regulation of AI is an indication of the administration's inclination to plutocratic despotism. The Constitution's Tenth Amendment explicitly leaves to the states "the powers" that are not otherwise delegated to the federal government. Such powers touch a wide variety of bread-and-butter issues and problems in public life. It is for this reason that, nearly a century ago, Justice Louis Brandeis extolled the states as the "laboratory" of "experimentation" in a dissenting opinion involving an impressive new technology, retail refrigeration. If implemented, the order would forfeit a valuable resource for learning.
What is worse, given congressional inaction, consumers would lose the last viable way of holding exceedingly powerful companies responsible for the harms they cause. If the past three decades of the laissez-faire approach to platforms show us anything, it is that policymakers should not let happy talk about innovation or global competitiveness blind them to the ways in which companies may harm consumers. The risks here are too great to let that happen.