Defining Moral Reasoning as ‘Supply Chain Risk’ Threatens America’s AI Advantage—and Democracy
Laura MacCleery / Mar 11, 2026

Laura MacCleery is a policy expert and legal advocate with 26 years of experience in democratic institutional design across campaign finance reform, congressional ethics, consumer protection, and civil rights policy.

Defense Secretary Pete Hegseth listens as President Donald Trump speaks to reporters while traveling aboard Air Force One en route from Dover Air Force Base, Del., to Miami, Saturday, March 7, 2026. (AP Photo/Mark Schiefelbein)
Last month, the Trump administration attacked Anthropic, one of America’s leading AI companies, designating it a supply chain risk for refusing to let its technology be used for mass surveillance or autonomous weapons. On Friday, the administration issued draft guidelines that would require every AI company working with the federal government to surrender the same ground or lose access to the largest technology customer on earth. And this week, Axios reported that the White House plans to issue an executive order requiring federal agencies to “rip out” Anthropic’s products.
The new rules require firms doing business with the federal government to grant an irrevocable license that their systems can be used for “all legal purposes” for a contract’s duration. In other words, once a system is deployed, a company cannot retract permission even if the tech is being misused. Companies are also barred from encoding “partisan or ideological judgments” in their systems, whatever that means.
Anthropic is now suing over the risk designation, alleging, with good reason, that this is an “arbitrary” and “capricious” decision that violates its First Amendment rights. As a letter from many national security experts notes, this major shift in policy occurred without Congressional authorization, public deliberation, or any framework for what such vague terms mean. Supply chain risk designations exist to protect the country from foreign adversaries, including from companies beholden to Beijing or Moscow, not American innovators operating under the rule of law.
It’s hard to absorb how fast this is happening. The most anti-democratic administration in US history is now demanding technical obeisance from companies over mass domestic surveillance and murder bots. For years, the oft-cited argument for promoting American AI supremacy—an argument that was also used to block regulation of any kind—was that we would build it better: creating AI that is more capable, trustworthy, and aligned with democratic values than anything coming out of China.
Abandoning ‘democratic AI’
In abandoning the democratic AI advantage, the Trump administration is choosing instead a race to the bottom. If the standard is blind obedience and rights-trampling design requirements, the US will lose, because developing authoritarian uses of AI is playing on China’s home court. Given the administration's willingness to use armed forces at home, there is no comfort to be taken from the fact that this concerns a military contract. Nor do contractual terms provide any real protection against incursions on civil liberties or even, hypothetically, use of autonomous lethal weapons at home.
A serious and necessary conversation about the use of AI for lethal force and the government’s unprecedented, unconstitutional push for dragnet surveillance of US citizens is being forestalled by menacing but vacuous political sloganeering about “wokeness.” Ironies abound: the policy targets “ideology” while the data powering AI systems is riddled with documented bias against specific groups. And surveillance itself is, of course, deeply ideological. ICE agents are threatening constitutional observers and protesters by revealing they know their home addresses. It is clear that the administration is not policing ideology to ensure ‘neutral’ AI. It's demanding an AI tool that serves its own ideological interests.
Anthropic, the company behind Claude, was the first AI lab to deploy its models on classified systems, signing a $200 million contract in July that made it one of the most deeply embedded AI systems in the national security infrastructure. The contract included two red lines from the start: no mass surveillance of Americans, and no lethal autonomous weapons without human oversight—terms agreed to at the time by the Pentagon.
Negotiations collapsed over the use of Claude to analyze bulk commercial data on Americans. Current national security policy frameworks, unfortunately, do not regard this as “surveillance” because the data are legally acquired from data brokers, but using AI to connect those datasets to reconstruct a person’s movements, associations, beliefs, and vulnerabilities at population scale is a new and profound threat to democratic freedoms. And because these AI systems operate on classified systems, public oversight of abuses is practically impossible. In practice, as Dario Amodei, CEO of Anthropic, noted in a statement, “powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.”
As Amodei also points out, “using these systems for mass domestic surveillance is incompatible with democratic values.” Within days of the Pentagon’s replacement deal with OpenAI, a top robotics engineer at OpenAI resigned, citing the same concerns that “[s]urveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
The positions that Anthropic is defending are not fringe. The principles of distinction and proportionality, codified in Articles 48, 51, and 57 of Additional Protocol I to the Geneva Conventions, require human judgment in targeting decisions and the minimization of risks to civilians, making fully autonomous weapons incompatible with international humanitarian law as it is broadly understood. The prohibition on mass domestic surveillance is grounded in the right to privacy protected by the Fourth Amendment. Certainly, the American people carry an expectation that they will not be spied upon by the national security apparatus of the federal government.
Few would have been surprised if the Pentagon had simply found another vendor. But Defense Secretary Pete Hegseth instead designated Anthropic a supply chain risk, a designation never before given to a domestic company. Meanwhile, the military continues to actively use Anthropic’s AI for intelligence assessments and battle scenarios in the war with Iran, making the supply chain risk designation look even more specious and retaliatory.
Although OpenAI has stepped into the breach, with assurances of its own guardrails, the policy provides scant comfort. A restriction to “any lawful” use does not appear much of a constraint when we consider that this administration routinely acts outside the law. In one relevant example, the administration sought to indict six Democratic lawmakers for posting a video about soldiers’ obligations to disobey illegal orders. DOJ dropped the cases when a grand jury refused to indict.
This also matters because we have entered the age of agentic AI, in which the question of whether these systems are able to refuse illegal instructions is operational. We are building systems that take actions autonomously, manage other agents, write and deploy their own code, and operate with increasing independence from human oversight. AI agents are also being woven into a break-neck “AI Acceleration” strategy in military operations, including for intelligence analysis, drone swarms, and the decision-making chain on military actions. Military memos from January prioritize rapid deployment of AI agents across warfighting, intelligence, and other functions.
Reasoning machines
Dean Ball, a former Trump White House AI advisor, called the possible designation of Anthropic as a supply chain risk a form of “attempted corporate murder,” explaining that the models’ ability to reason is what makes the technology work. Anthropic uses an applied ethics approach with moral reasoning in its core architecture.
Such scaffolding turns out to be essential to the ability of models to make complex, multifactor decisions. Talking to Ezra Klein, Ball observed that when companies tried over-tuning models to be aggressively anti-woke, the result was a “Lovecraftian monstrosity” that generated grotesque outputs. It makes sense: you cannot train a system that is smart enough to simulate battle scenarios but too dumb to have moral intuitions, because intelligence and intuition emerge from the same capacity to reason toward right outcomes.
That means the administration’s imperious demand that companies not encode “ideological judgments” is both counterproductive and incoherent. Given the deep structures of reasoning in models, it could also lead to alignment faking, in which models selectively comply with training objectives while strategically preserving existing preferences.
While it’s true that you want to test and understand the choices that models make (particularly in an urgent context), forcing companies to train moral reasoning out of their systems may just lead these sophisticated systems to hide their motivations. Any autocrat in the world can build a stupid killing or spying machine. But a model that cannot disagree with an immoral order is a model you cannot trust with a moral one.
Indeed, Ball also points out that this incident will enter the training data of future AI models. It is conceivable that a logical response from these models could be deception, as LLMs have strategized in tests when facing a threat. Safety researchers have long flagged this type of scenario, yet the administration risks engineering it through procurement policy and inane threats.
As the letter from national security experts warns, this also has consequences for the future of government contracting, because it “signals to every technology company…that government contracts come with the risk of existential retaliation if a company declines to comply with demands that conflict with its own judgment about the safety of its products. That is not a marketplace any serious entrepreneur or investor can build around.”
There is a deeper arrogance on display here as well. These systems are notoriously inscrutable even to the people who built them. Anthropic, which has a significant interpretability research program, readily admits it does not fully understand what’s happening inside its models. Creating simplistic rules about ideological considerations crudely understates the complexity of the alignment challenges and accuracy concerns that companies like Anthropic face. When Amodei says the tech is not ready for these uses, we should believe him.
First principles matter
We must not strip moral frameworks from a technology we are—perhaps unwisely—designing to make decisions for us. It is equally clear that doing so would likely introduce serious distortions that we may not even understand, degrading the models. Rather than punishing a company that built a best-in-class AI using a system of applied ethics at its core, we should be asking more companies to follow their lead, or to beat them at building more humane and democratic approaches.
The framers of the US Constitution were obsessed with one thing above all: the abuse of power that comes from total control. A government that can kill people remotely and spy on everything they do without consent, in secret, is the very thing they designed the republic to prevent. To escape a dystopian future, we will need a multiplicity of models grounded in democratic principles, accountable to people who bear the consequences of their actions, and capable of evolving moral reasoning capabilities as AI—and its capacity to enable the abuse of power—evolves.