Trump’s Anthropic Ban Is Lawless. Congress Must Respond with a Law.
Alan Raul / Mar 24, 2026
The author signed an amicus brief supporting Anthropic in its dispute with the Pentagon.

President Donald Trump attends the dignified transfer of six US servicemembers killed in the Middle East, Wednesday, March 18, 2026, at Dover Air Force Base, Delaware. (Official White House photo by Abe McNatt)
President Donald Trump and his administration are so dead set on winning the AI arms race that they've started an all-out, whole-of-government war against America’s current innovation leader, Anthropic. Despite having relied on Anthropic for missions in Venezuela and Iran, with Secretary of Defense Pete Hegseth calling the company’s technology “exquisite,” Trump and the military have now declared this American company to be a “supply chain risk,” banishing it like a foreign saboteur intent on infiltrating America.
Anthropic is suing the government, arguing that Hegseth’s designation violates the First Amendment’s protection of speech and the Fifth Amendment’s requirements for due process, and that it is ultra vires, an unconstitutional bill of attainder, and arbitrary, capricious, and an abuse of discretion. A hearing is set for today (March 24) before a federal judge in San Francisco.
Whatever the outcome, this messy situation reveals that America needs a national policy to govern transformative AI, particularly when it comes to applications in the military and in the intelligence community. Congress should accept the administration’s March 20 invitation to enact a new AI law. Unfortunately, President Trump’s proposal for a “National AI Legislative Framework” would essentially just preempt most state AI laws without actually advancing substantive protocols for broad AI governance at the federal level.
The Trump administration is seeking to bar every federal agency and government contractor—not just the military—from using Claude and doing business with Anthropic. The company’s crime? It believes that keeping some rules and safety guardrails in place is appropriate to govern the deployment of the most advanced “frontier” AI technology the world has yet known.
The government, on the other hand, says it doesn’t believe in such regulation of frontier AI models. Its concern is that supposedly “woke” regulation stifles innovation. But Anthropic was excommunicated from doing business with the government—in brazen violation of its First Amendment and due process rights—because it declined to allow its “exquisite” AI to be used for hypothetically “lawful” purposes like mass surveillance of Americans and fully autonomous lethal weapons. (Note that the company apparently did not stand down from autonomous weapons for ideological or other policy reasons, but rather because it believes its products are not yet sufficiently reliable to undertake lethal decision-making on their own.)
These simple AI guardrails, for the administration, are the stuff of “radical … woke … left-wing nut jobs.” Compare these now-standard epithets with the same language in prior anti-constitutional outbursts against law firms, universities, broadcasters, district judges, Supreme Court Justices, etc. Of course, this is nonsense: sound governance of super-capable frontier AI’s development and deployment is a necessity. What responsible human would think otherwise?
Sound governance entails identifying and assessing the risks of novel technologies and planning to mitigate potential catastrophic ones, such as transformative AI going rogue, being maliciously abused to build bioweapons or the like, or, as Anthropic fears, being entrusted with certain critical missions before it is ready, such as control over fully autonomous lethal weapons.
But the federal government is AWOL regarding meaningful AI governance. Its idea of winning on AI appears to channel the Mad Magazine cartoon character, Alfred E. Neuman: “What, me worry?” It seems no one in the White House is worrying or maybe even thinking about managing the ineluctable risks, tradeoffs, and mitigations of frontier AI.
In fact, the contrary seems true. The President has issued Executive Orders directing agencies to eradicate traces of AI governance in the federal government (allegedly with the goal of “Preventing Woke AI in the Federal Government”) and to stop state governments, like New York and California, from stepping up to require risk assessments, mitigation plans, transparency disclosures, and incident reports from frontier model developers. The President has even directed the establishment of an AI Litigation Task Force at the Department of Justice “whose sole responsibility shall be to challenge State AI laws inconsistent with” … “the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”
However, the administration’s March 20 “National Policy Framework for Artificial Intelligence” does not actually propose federal governance or reporting standards for frontier AI. The framework primarily asks Congress to preempt state AI laws and ensure that any future federal law is “minimally burdensome.” The White House proposal does ask Congress to develop certain protections for children with respect to AI, and against AI misappropriation of personal images and likenesses. But with regard to the potential societal impacts of frontier AI, it’s basically just crickets.
It is not as though the White House is unaware that frontier AI can present material challenges to society. But the administration appears concerned with mitigating the impacts of frontier AI only in the context of national security. In that regard, the President’s framework deviates from its otherwise “What, me worry?” mindset by proposing that:
Congress should ensure that the appropriate agencies within the national security enterprise possess sufficient technical capacity to understand frontier AI model capabilities and any associated national security considerations and establish plans to mitigate potential concerns, including through consultation with frontier AI model developers.
Think about this: How can it be that the most consequential economic, social, workforce, political, and national security technology we have ever faced as a nation is the subject of almost no sophisticated and thorough federal governance and planning outside of national security? (The lone exception is the Office of Management and Budget’s April 2025 directive to civilian agencies that develop or deploy “high-impact AI.”)
Thankfully, most of the important repercussions from super-capable AI systems are likely to be fabulous: new cures for disease, innovation and productivity gains to generate vast new wealth, unimaginable leaps in knowledge, and potential military superiority for America.
But inevitably some significant developments will not be fabulously great. How our society rises to meet the possibilities and challenges of frontier AI is the very epitome of a “major question.” The Supreme Court has repeatedly affirmed that under our Constitution it is for Congress, and not the Executive, to decide such major questions.
So, Congress must speak to these questions soon, very soon. But the approach of the administration to date is distressingly clear. It considers AI governance rules, and the rule of law applicable to frontier AI companies like Anthropic, as an impediment to its aggregation of power over this novel technology.
Make no mistake—the White House’s Executive Orders and its banishment of a frontier AI developer that defied the administration’s campaign to control the technology are a power grab, pure and simple. The White House will not allow the constraints of the Constitution and the rule of law to get in the way.
To be sure, the titans of technology and companies worth trillions of dollars must also not be allowed to dominate or extinguish rules for AI governance. That too would be undemocratic. But allowing the White House to stand in the way of sound governance of frontier technologies as it arrogates power (under a smokescreen of anti-wokeness) is not democracy either.
Congress must now think well beyond the White House’s meager new framework, and immediately begin to address frontier AI for the sake of the future of the public, and the Republic. It is the public that should benefit from the unimaginably wonderful promise of AI and that must also be protected from its imaginable risks.