Analysis

Breaking Down Amicus Briefs in Anthropic’s Fight with the Pentagon

Cristiano Lima-Strong, Justin Hendrix / Mar 23, 2026

Anthropic CEO Dario Amodei at TechCrunch Disrupt in 2023. (TechCrunch)

The legal standoff between Anthropic and the Department of Defense is reaching a critical point, with a hearing set for Tuesday on the company’s motion for a preliminary injunction against a Pentagon designation blocking use of its services.

The Pentagon earlier this month labeled the artificial intelligence company a “supply-chain risk,” an unprecedented designation against a United States company that bars department employees and contractors from using its products, like the popular AI assistant Claude.

Anthropic has sued to block the designation, calling it a retaliatory move that violated the company’s First Amendment and due process rights.

Ahead of the pivotal hearing, dozens of tech employees, former military officials, free speech advocates and legal scholars weighed in on the case by filing amicus briefs on Anthropic’s motion, overwhelmingly siding with the company in pushing back against the Pentagon.

Here is a breakdown of how they viewed the Trump administration’s move:

Former service members

A group of nearly two dozen high-ranking former US military personnel — including Navy, Coast Guard and Air Force officials — filed a brief arguing against the Pentagon and calling for relief from the court.

  • The group, which included several former secretaries of the Navy and Air Force, argued that a “military grounded in the rule of law is weakened, not strengthened, by government actions that lack legal foundation,” and that designating an American company a security risk was an “extraordinary and unprecedented” step that required “firm grounding.”
  • The group said at stake in the case was the “misuse of powerful national security authorities by civilian political leadership” as “retribution against a private company that has displeased the leadership,” and said they were “gravely concerned” that the Pentagon had exceeded its authority by issuing a designation “in an unprecedented manner that appears disconnected from the purposes” outlined in relevant law.
  • The group argued that the Pentagon’s designation “risks the long-term viability of critical public-private partnerships” between it and technology companies, which could “detract from military readiness and operational safety by harming the military’s ability to equip U.S. servicemembers with the latest, most effective technology.”

Google and OpenAI employees

Nearly 50 Google and OpenAI staffers, writing in their personal capacities, filed a brief siding with Anthropic over the federal government.

  • This collection of engineers, technical staff, scientists and researchers wrote that the Pentagon acted “recklessly” by invoking the designation to “punish” Anthropic rather than simply canceling its contract and seeking out another AI company to partner with.
  • If allowed to stand, they wrote, the decision could harm US competitiveness on AI as it would “chill open deliberation in our field about the risks and benefits of today’s AI systems.”
  • The group said they supported Anthropic’s bid to draw “red lines” and build guardrails against the use of its technology to fuel mass surveillance or power autonomous weapons. “The best currently available AI systems cannot safely or reliably handle fully autonomous lethal targeting, and should not be available for domestic mass surveillance of the American people,” they wrote.

Free speech advocates

A coalition of groups that champion free expression and First Amendment rights filed in support of Anthropic. They included The Foundation for Individual Rights and Expression (FIRE), the Electronic Frontier Foundation (EFF) and the Cato Institute.

  • The brief excoriated the Pentagon’s designation, writing that the “potentially ruinous sanction threatens not only Anthropic’s business but also that of its partners and customers,” and that if left unchecked it “imposes a culture of coercion, complicity, and silence, in which the public understands that the government will use any means at its disposal to punish those who dare to disagree.”
  • “The Pentagon’s temper tantrum is a textbook violation of Anthropic’s First Amendment rights,” the groups wrote, because it requires the company to “make a trade on a core freedom of expression.” The groups added that the Defense Department has freely admitted its decision was retaliatory, making it an attempt to coerce the company into compliance.
  • The groups argued that Anthropic’s choices around the output of its Claude product are “expressive” and thus protected under the First Amendment. “Requiring Claude to express ideas that Anthropic does not wish to express is classic compelled speech, which lies in the heartland of First Amendment’s prohibitions,” they wrote.

Microsoft

Tech giant Microsoft, which is itself a major US government contractor and has a strategic partnership with Anthropic, sided with the company in calling for a “pause” on the designation.

  • Microsoft argued that a pause would “enable a more orderly transition and avoid disrupting the American military’s ongoing use of advanced AI,” and would spare it and other tech companies from having to act “immediately” to change arrangements for products or services offered to the Pentagon.
  • The company argued that the “unprecedented order” between the two parties would nevertheless have “broad negative ramifications for the entire technology sector and American business community,” since going forward all companies dealing with the Pentagon would be “forced to account for a new risk in their business planning.”
  • Microsoft argued the best outcome would be a “negotiated resolution” between the two parties that did not set a broader legal precedent. “There is reason to believe a negotiated resolution is possible here,” the company wrote, pointing to the fact that the Pentagon was able to separately come to terms with OpenAI.

Law professor Alan Rozenshtein

The Minnesota law professor and Lawfare editor, whose analyses of the spat between the Pentagon and Anthropic have been widely cited, filed a brief in favor of Anthropic.

  • Rozenshtein argued that Defense Secretary Pete Hegseth’s order “far exceeds his limited statutory authority” to restrict the Defense Department’s contracting, because Congress intended for it to be limited to preventing adversaries that might “sabotage, maliciously introduce unwanted function, or otherwise subvert” US government systems. “That threat model bears no resemblance to the circumstances of this case,” he wrote.
  • He argued federal law “forecloses” the Pentagon’s designation, including by largely targeting threats operating in secret against US interests, not vendors operating in “good faith” negotiations with the government.
  • The law governing these decisions was intended to “confront a fundamentally different problem, and stretching those authorities to cover a dispute with a domestic software provider would exceed their text and purpose,” he wrote.

Apple-linked trade group

ACT | The App Association — a tech industry trade group that was reported to be primarily (and quietly) funded by Apple — filed in support of Anthropic’s push for an injunction.

  • The group, which purported in the filing to represent “software developers” particularly in the small business community, argued that the designation would impose a “heavy burden” on “small businesses that supply software to the government.”
  • The group wrote that the “cryptic scope and dictates of the ban imposed by the government are impossible for small businesses to ascertain and adjust for, leaving them exposed to the whims of whichever government agency or official may later interpret it.”

Digital and civil rights groups

Abolitionist Law Center, Access Now, the Center for Constitutional Rights and the Tech Justice Law Project filed a brief in support of neither party.

  • The groups argued that the dispute should be considered “within the broader unlawfulness of the parties’ collaboration,” and that the Pentagon’s use of Claude is “illegal under both U.S. and international law protecting civilians during warfare because it does not allow humans adequate time to evaluate the lawfulness of the targets it selects.”
  • Even if militarized AI is not fully autonomous, the groups wrote, it poses “catastrophic and irreversible human rights risks.” The use of these tools in war “enhances a military’s capacity to deliver maximum death and destruction at a speed and scale beyond human capabilities” beyond any critical or legal analysis, thus “severely” threatening “compliance with domestic or international law and human rights obligations.”
  • The groups accused the Pentagon and Anthropic of “already jointly committing war crimes,” including through US strikes as part of the war in Iran.

Moral theologians and ethicists

A group of academics describing themselves as Catholic moral theologians and ethicists filed a brief fully in support of Anthropic.

  • They wrote that the case touched on an area of “longstanding concern”: that “when technology is capable of violating life, dignity, and freedom, it is reasonable to draw clear boundaries around its use,” and that those boundaries reflect “caution, not defiance.”
  • The academics argued that the Catholic Church’s “moral vision offers support for Anthropic’s particular stand” against the Pentagon on the issues of mass surveillance and autonomous weapons, adding that they “applaud Anthropic for its principled ethical stance on AI use by the Department.”
  • They wrote that while privacy is not an “absolute right” in Catholic teaching, mass surveillance by the Pentagon would clearly overstep the concept outlined in Catholic thought, and noted that the Vatican “has consistently and strongly spoken out against autonomous weapons” since at least 2013.

Federal worker union

The American Federation of Government Employees, the largest union representing federal civilian employees, filed a brief in support of Anthropic’s push for an injunction.

  • The government employee union argues that “This case presents the latest chapter in the Trump administration’s concerted campaign to wield the power of the executive branch against its purported enemies,” part of a “long-running, concerted campaign to abuse the power of the executive branch to punish and suppress political dissent and opposition and coerce submission to the Trump administration’s preferred views.”
  • They say the administration’s “far-ranging and far-flung invocation of ‘national security’ as justification to retaliate against labor unions, law firms, individuals, universities, and (now) AI companies that are unwilling to bend to its every demand” are the type of abuse that the Supreme Court has warned against. The brief cites other examples where the administration’s appeal to “national security” justifications for its efforts were “hollow.”
  • To allow the government’s actions to remain in effect “would threaten free and unrestrained dialogue among not only Anthropic officials and employees but among all those working in the sector at this critical moment of rapid AI development and deployment.”

Foundation for American Innovation, et al.

The parties joining this brief in support of Anthropic — including former Trump administration official Dean Ball as well as Fifty Years, Fathom, ChinaTalk, Martin Chorzempa, Alex Imas, Pangram, Institute for Progress, Dwarkesh Patel, Roots of Progress and Saif Khan — point out that there is substantial procedure involved in designating a supply chain risk.

  • “An order designating a corporation as a supply chain risk is a significant government intervention in the market,” and “Congress therefore intended to limit supply chain risk designations to circumstances where the Secretary had identified concrete present dangers with particularity.” Congress also mandated that designations should be “subject to judicial review to ensure the predicates and procedures it has mandated have been satisfied.”
  • But in this case, there are no recommendations that would appear to satisfy the need for records that the court could review, the brief argues. And there are various other requirements of the statute, such as around notification, that represent steps that may have been skipped.
  • The court should ensure that the requirements Congress put into law are met, since “Enforcing Congress’s predicates here would strengthen, not weaken, the long-run credibility of lawful government action in these markets.”

“Values-led” investors

This group of amici — which includes Freedom Economy Business Association, Candide Group, Investor Advocates for Social Justice, Mission Driven Finance, The Nathan Cummings Foundation, Inc., Omidyar Network LLC, Pluralize Capital and the individuals Howard Fischer and Thomas Haslett — argue that Hegseth’s order that all military contractors must cease doing business with Anthropic “has no authority in any statute or regulation.” (Note: Omidyar Network has provided grant funding to Tech Policy Press.)

  • The brief argues there is no evidence that “supports a conclusion that Anthropic, an American company whose services the Administration intends to keep using for at least the next six months,” is a supply chain risk.
  • Even if the government did have some substantial reason to take action, the brief argues, it “would still be unlawful because the Administration expressly took them to punish Anthropic for expressing disfavored views,” violating the company’s First Amendment rights.
  • Noting that the amici are “values-led investors,” the brief argues that “If the ability to contract with the federal government is limited to companies that align with a given Administration’s policy views, such investment becomes untenable.”

Former judges and the Democracy Defenders Fund

This group of 149 former judges — including prominent names such as former federal court judges Michael Luttig and Nancy Gertner — argues that the judicial branch must not surrender its role in reviewing matters such as this to the executive branch.

  • “There is no ‘national security exception’ to ordinary principles of judicial review,” the brief argues.
  • Because the Department of Defense did not follow the rules laid out in statute, “the court can set aside its actions on that basis alone.”
  • “More fundamentally, as a practical matter, no one is trying to force the Department to contract with Anthropic,” the brief argues. The Defense Department can contract with anyone it might like, “But it cannot use Section 4713 to punish Anthropic in its dealings with the rest of the world — including other government agencies whose functions are unrelated to national defense and private businesses.”

Center for Democracy and Technology and the American Civil Liberties Union

These two civil society organizations writing in favor of Anthropic are concerned chiefly with claims around mass domestic surveillance and “why, as a result, Anthropic’s push for strict limitations on the government’s use of AI is critical to protecting the public’s privacy interests.” (Note: one signatory, CDT’s Jake Laperruque, is a current fellow at Tech Policy Press.)

  • The amici are concerned that “massive commercial datasets allow agencies across the government to intrude into the most intimate details of people’s lives, and to do so absent any independent authorization or oversight.”
  • “In addition to data purchases, government agencies conduct still other types of large-scale data collection,” such as social media monitoring.
  • Since AI tools make sifting through such data much easier, and because there are not adequate legal protections against abuse, “Anthropic’s advocacy for transparency and safety in AI development, and its discussion of the risks of mass domestic surveillance, are critical contributions to the ongoing public debate about the relationship between AI and government power.”
  • The government “is not permitted to punish Anthropic for its advocacy about the dangers of AI-enabled surveillance,” and thus the court should issue a stay.

Tech trade groups

Tech industry associations — TechNet, Software & Information Industry Association, Computer & Communications Industry Association, and Information Technology Industry Council — filed their brief in support of Anthropic, arguing that the government’s actions put the entire tech sector at risk.

  • The administration’s actions “threaten the entire enterprise of federal procurement from the technology industry by disregarding the carefully constructed legal structure” that is required in dealing with federal contractors, the brief argues.
  • “If an executive branch agency may convert a contract dispute into a government-wide supply-chain risk designation, disregarding every procedural safeguard Congress prescribed, then the procurement framework Congress built protects no one.”
  • The government’s communications, the brief argues, are difficult to discern. “Hundreds if not thousands of companies are trying to parse the meaning of social media posts, inconsistent and shifting Administration statements, and vague directions.”
  • And the government does not have the right to punish Anthropic based on viewpoint. For instance, “if the government’s actions amount to compelling a contractor to alter the message embodied in an expressive product, like Claude, those actions raise serious compelled-speech concerns.”

Former senior national security government officials

This group of former officials, including former Director of National Intelligence Avril Haines, National Security Advisor Jake Sullivan and Acting Assistant Attorney General for National Security Mary McCord, argues that the designation of Anthropic as a supply chain risk was pretextual.

  • Because there was no due process, “the Administration’s actions here are unlawful not just because they fail to meet the requirements of the supply chain risk statutes, but also because they violate fundamental constitutional guarantees and are thus ultra vires.”
  • “DoD has not even attempted to articulate a theory for why disabling Anthropic advances national security interests. This failure is fatal.”
  • “This was ‘sentence first—verdict afterwards’: a preordained sentence against Anthropic, with a veneer of process to follow thereafter.”
  • The designation could discourage other companies from working with the government if they felt they could not do so honestly. “Such a signal would discourage other companies from engaging openly with government agencies, raising safety concerns, or participating in policy debates related to AI governance.”
  • The situation puts the US at risk, particularly in the context of international competition with rivals such as China.

Authors

Cristiano Lima-Strong
Cristiano Lima-Strong is a Senior Editor at Tech Policy Press. Previously, he was a tech policy reporter and co-author of The Washington Post's Tech Brief newsletter, focusing on the intersection of tech, politics, and policy. Prior, he served as a tech policy reporter, breaking news reporter, and s...
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President of Business Development & In...

Related

Analysis
Anthropic-Pentagon Dispute Reverberates in European Capitals (March 19, 2026)
Perspective
The Anthropic Pentagon Standoff and the Limits of Corporate Ethics (March 12, 2026)
Podcast
How to Think About the Anthropic-Pentagon Dispute (February 28, 2026)