Analysis
Anthropic-Pentagon Dispute Reverberates in European Capitals

James Ball / Mar 19, 2026

James Ball is a fellow at Tech Policy Press.

Defense Secretary Pete Hegseth follows President Donald Trump to board Air Force One, Wednesday, March 18, 2026, at Joint Base Andrews, Md. (AP Photo/Julia Demaree Nikhinson)

The ongoing clash between Anthropic and the United States Department of Defense is playing out less like a procurement dispute between client and supplier and more like a high-stakes melodrama, not least because of Secretary of Defense Pete Hegseth’s decision to designate Anthropic a “supply chain risk” and to take an expansive view of what such a designation entails.

The DoD position—that the designation prohibits companies with defense contracts from dealing with Anthropic altogether, rather than merely from using its tech on federal government contracts—has elevated the dispute into an existential battle for Anthropic. Unsurprisingly, Anthropic is seeking to overturn the designation in court.

At the core of the dispute are two assurances that Anthropic sought from the Department: that its technology would not be used for “mass domestic surveillance” of US citizens, and that its technology would not be deployed in autonomous weapons. The DoD asserts Anthropic should be content with an assurance its technology would only be used for “any lawful purpose.”

Much of the coverage of the argument has framed it as a domestic political row and as a clash between the ethical principles of Anthropic’s founders, including CEO Dario Amodei, and the “warfighter” ethos of Secretary Pete Hegseth. However, in practice, the situation is being closely watched in capitals far beyond Washington—not least as it has significant policy and legal implications for Big Tech across the world when it comes to both surveillance and AI-powered weaponry.

Domestic mass surveillance

Amodei’s reference to “domestic mass surveillance” invokes a historical precursor to the current row: the revelations in documents leaked by National Security Agency whistleblower Edward Snowden in 2013. The documents shed light on the scale of mass surveillance at the agency, both within the US and across the world, including on allied nations and their leaders—and the heavy involvement of US Big Tech companies in both.

The political fallout of the Snowden revelations was largely concentrated in the US, where it led to a relatively modest overhaul of certain laws regulating surveillance. Internationally, though, the disclosure of US mass surveillance galvanized efforts in the European Union to regulate Big Tech and establish data protection mechanisms.

Most directly, this included proposals around “data sovereignty”—the idea that data on EU citizens should be stored and processed within the EU—but it also contributed to much broader efforts to tackle tech power that eventually helped to produce the Digital Markets Act (DMA) and Digital Services Act (DSA).

The burden of compliance with these measures fell entirely on Big Tech companies, not on the US state. Meta estimated that compliance with the DMA alone required 590,000 engineer hours, while the Center for Strategic and International Studies put the compliance and operational costs of the DMA and DSA to Big Tech at between $22 billion and $50 billion.

Engaging in global mass surveillance was a policy choice of successive US administrations—but the costs of the international backlash from policymakers were borne by Big Tech. Those same regulators and legislators will almost certainly have noted with interest that Amodei’s statement raised concerns only about domestic surveillance. What starts in America rarely stays there.

Most of the attention from policy watchers internationally, though, has so far centered on the other aspect of the argument—that of AI’s role in autonomous weaponry, not least because the ongoing war with Iran serves to focus minds across the planet. The dispute raises the urgency of two intertwined debates at once: what is the legality of the use of these systems, and how should policymakers respond to their deployment?

Lethal autonomous weapons

In 2013, a group of around two dozen experts published the Tallinn Manual, an attempt to codify how existing international law should be applied to cyber-warfare, which has formed much of the basis for subsequent debates in that area.

Today, an effort is underway to produce a similar manual setting out the position of international law on AI-guided weaponry—but it has yet to be written. Tsvetelina van Benthem, lecturer in international law at the University of Oxford, is one of the experts collaborating on that project, and a member of the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems.

“First of all, it's not just an ethical issue,” she explains. “Second, it's not only a contractual issue between Anthropic and the US Defense Department. It really raises questions about the legality under international and domestic law of the use of such systems.”

It is easy for Hegseth to promise that AI will only be used for “any lawful purpose.” It is far less easy, van Benthem notes, for anyone to say what that actually means. Even absent specific treaties, the US is bound by customary international law, which is incorporated into its common law—so on paper, at least, neither Anthropic nor the Department of Defense can shrug off international law obligations.

In practice, things might well be different. There is still little consensus as to what actually constitutes an autonomous weapon system, or what constitutes meaningful human involvement in a final targeting decision—would an outsourced worker performing a box-ticking exercise be enough, for example?

Similarly, while international law might in theory require the US to demonstrate it had taken due care in procurement to ensure its systems were safe and reliable for use in warfare—especially given Amodei has said AI systems do not yet meet this standard for targeting—taking action against the US government may be a non-starter. “Where would you take the US to account for that?” van Benthem asks. The legal forum simply does not exist.

AI companies themselves, though, might be a much easier target for litigation. “It might be that even though they could not be held liable for a strike itself, it might be that there will still be a due diligence liability for the very procurement of a system that is incapable of complying with those requirements,” she notes.

An AI company whose software was potentially involved in a fatal strike might face civil and criminal litigation in domestic courts across the world, as well as potential action under international law. Such companies’ international presence necessarily limits the level of indemnity the US can offer—and they lack the legal and political protections that come with being the world’s leading superpower.

Breaking down the silos

Experts agree that the combination of the DoD/Anthropic clash and the US and Israel’s joint strikes on Iran has turned a years-long theoretical debate on law and policy into an urgent one. Even the EU’s AI Act—probably the most expansive AI regulation in existence—explicitly excludes military applications of AI, an exclusion that is already starting to look like an anachronism.

“I think that maybe we are seeing a breaking down of these silos,” says Ingvild Bode, Director of the Center for War Studies at the University of Southern Denmark and an expert member of the Global Commission on Responsible AI in the Military Domain.

She notes that all LLM foundation models are essentially dual-use technologies, meaning that trying to have conversations about civilian and military AI regulation in parallel is likely a futile exercise.

“We’re realizing that we have to have these conversations in unison,” she says. “Because it's the same companies. I see more momentum around that conversation.”

Bode believes we will have to reckon with assumptions built into international law written long before the idea of “algorithms making the decisions that affect taking human life” was even considered possible. The Geneva Protocols, she notes, rely on a human being accountable for decisions about lethal force in warfare and do not function without one. But that requirement is only implicit—nowhere is it actually stated.

The norm setter

Now, gaps like that could become a problem. If nothing else, the dispute has focused minds on the fact that the new rules have not been written—and if they are not codified soon, the US may get to set the de facto precedents that others then have to build upon.

“I think it really has drawn attention to the US as the norm setter, and the norms are set at an extremely low level,” says Elke Schwarz, Professor of Political Theory at Queen Mary University of London and Vice-Chair of the International Committee for Robot Arms Control. “We don't necessarily have in the EU context or in the international context, a robust set of rules and regulations that apply for the defense domain.”

Schwarz believes the Anthropic/DoD dispute is likely the result of a misunderstanding or miscommunication between the two parties over what was probably intended as a contractual formality rather than a deliberate public showdown. But it is one that has moved major policy debates from the staid pace of academic discussion to the forefront of legislators’ minds across the world.

“I think it is that legal, kind of like a, cover-your-ass moment which they've come to regret,” she says. “And then I think that's paired with an intra-industry spat between Anthropic and OpenAI, and then there's clashing personalities, and all that weird, strange Silicon Valley swamp that is associated with that industry.”

On the policy front, Schwarz suggests two directions in which European governments are likely to go—possibly doing both at once. “Either everybody is going to just invest in, probably US-focused large language models for the defense purposes, because nobody wants to fall behind,” she suggests, believing the argument could spur on a race to the bottom. “Or else it does galvanize, at least in the European context, much more urgency for multilateral efforts to put some meat on the bones of these regulations?”

Marc De Vos, Co-CEO of the Brussels-based think tank Itinera and professor at the Ghent University Law School, says the Anthropic crisis is concentrating the minds of legislators across Europe—and highlighting the need for tech sovereignty in the military sphere.

“It puts a very stark light on technological dependency and the need for some form of sovereignty, autonomy and control in Europe … And this is, I think, very unhelpful for the American tech companies,” he suggests. “It just accelerates the sense of urgency about what we would do with AI now that it's rushing headlong in a certain direction.”

Fundamentally, policymakers in Europe are faced with a US government that is no longer willing to be subtle in how it exercises the soft power granted to it by the global dominance of US Big Tech corporations. While some might wish to respond through regulation and updating international law for an era of autonomous weaponry, others will look to building an independent technology and defense sector.

“Trump would actually like the American tech companies to be able to control what other countries can do in warfare, as long as they can't control the US government in warfare,” he concludes.

“At the end of the day, if you're outside the US and you're a government, the bottom line is very clear. You have to control your own technology. Otherwise, you're going to be dependent on both the US government and US tech.”
