Key Questions on the Role of Technology in the Expanding Middle East War
Justin Hendrix, Ramsha Jahangir, Ben Lennett / Mar 5, 2026
A young Iranian boy looks at the sky while standing on the ruins of a diplomatic police station completely destroyed in US-Israeli attacks in Tehran, Iran, on March 4, 2026. (Photo by Morteza Nikoubazl/NurPhoto via AP)
In the days since the first US and Israeli strikes on Iran, the war has expanded across the Middle East, drawing more countries into conflict and threatening to further destabilize the region and the geopolitical order. It has also brought to the fore questions about the role of technology in armed conflict, including the controversial use of new artificial intelligence technologies.
Three fault lines have emerged with particular urgency: the battlefield integration of AI targeting systems and the accountability gaps they expose; the collapse of information guardrails at precisely the moment they are most needed; and the deepening privatization of warfare, as Silicon Valley companies become essential infrastructure for military operations. A fourth, quieter thread runs beneath all of them — the way surveillance infrastructure built to control citizens can be turned against the states that built it.
Tech Policy Press asked experts working at the intersection of technology policy, security, and international affairs to share what they are watching as the situation unfolds.
What follows are their responses, lightly edited for clarity and brevity. This piece may be updated with additional responses.
AI targeting and accountability
Heidy Khlaaf
Chief AI Scientist, AI Now Institute
We are in dangerous territory where Large Language Models (LLMs) and generative AI are being normalized as a valid technology for use within both AI decision support systems (AI-DSS) and lethal autonomous weapons systems (LAWS) for targeting purposes. Yet the current framing of Anthropic’s and OpenAI’s negotiations with the US Department of War risks overindexing on myopic interpretations of human oversight, or a particular company's so-called 'red lines', papering over what should be the real target of our scrutiny: generative AI algorithms are a flawed and inaccurate technology that fabricates and "hallucinates" outputs, at times with accuracy rates as low as 50%, and they are unlikely to be able to solve tasks outside their data distribution and training data sets.
Generative AI’s inability to handle the novel scenarios that arise from the fog of war thus raises serious questions about whether it can succeed in military settings. Furthermore, these “hallucinations” are an inherent property of these models given their probabilistic nature, and model providers themselves state that these issues will persist.
Letting Anthropic's safety theater drive the discourse also overlooks what is ultimately a superficial distinction between AI-DSS and LAWS in practice. The difference separating AI-DSS, which is how these models are currently being used, from AI-driven LAWS is often just a human in the loop who is likely to be affected by automation bias. If Anthropic itself admits that its models are too unreliable for LAWS, that implies they are not reliable enough for AI-DSS either, nor for the autonomous drone-swarming technologies it has put proposals forward for.
Regardless of oversight levels, current LLM safety falls far short of the reliability and accuracy measures that have long been prerequisites for defense and safety-critical systems. Deploying these models for either AI-DSS or LAWS may ultimately lead to conflicts becoming indiscriminate lethal campaigns.
Steven Feldstein
Senior Fellow, Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace
The clash between Anthropic and the Pentagon over the military’s use of the company’s technology presaged an important shift: more than ever, AI technology has become central to warfighting. Indeed, reports have emerged about just how central a role Anthropic’s tool was playing in the unfolding war. In the conflict’s first 24 hours, US forces struck 1,000 targets. In the ensuing days, the Pentagon kept up a ferocious pace of bombing. None of this would have been possible without AI technology. A cutting-edge platform called Maven—overseen by Palantir and powered by Claude—generated “hundreds of targets, issued precise location coordinates, and prioritized those targets,” reported the Washington Post. It was “speeding the pace of the campaign, reducing Iran’s ability to counterstrike and turning weeks-long battle planning into real-time operations.”
These operations underscore three crucial advantages offered by AI systems: scale, speed, and efficiency. But the growing reliance on military AI is not without risk. One of the lessons learned from the use of AI targeting systems in conflicts such as those in Ukraine and Gaza is that they are prone to mistakes, with mounting reports of civilian harm.
In the coming days, I will be looking at the following issues:
- How accurate are the Iran target lists, especially after high-priority targets have been expended? Do second- and third-tier targets represent legitimate military objectives, and is due attention being paid to preventing collateral civilian harm? As the data analyzed by Claude becomes noisier and more susceptible to distortion (the AI slop problem), how is the model compensating for potentially lower accuracy or limited verifiability?
- What type of oversight is the US military exercising over AI-generated target lists? Given the unprecedented speed at which targets are being produced and then struck, are target verification procedures holding up? How does the Pentagon’s legal review process for ensuring compliance with the laws of armed conflict interface with the model?
- How are after-action reviews of strikes being conducted? Reporting indicates that AI models are also evaluating strikes after they have been carried out; is it appropriate for these tools to conduct self-assessments of lethal strikes?
Emile Ayoub and Amos Toh
Senior Counsels, Liberty and National Security Program, Brennan Center for Justice
The focus on Anthropic’s dispute with the Pentagon should not distract from the crisis hiding in plain sight: Congress’s failure to regulate some of the riskiest uses of AI — namely, amplifying surveillance and automating the use of lethal force.
Anthropic is right to worry that the Department of Defense will use its AI platform, Claude, to conduct large-scale collection and analysis of Americans’ sensitive data. Military intelligence agencies have bypassed constitutional and statutory protections to buy up Americans’ data, including location and browsing history, from commercial data brokers without a warrant. AI tools supercharge the government’s ability to collect and analyze this information. They enable the government to gather and piece together data to expose a person’s movements, associations, and habits at scale — undermining a central aim of the Fourth Amendment “to place obstacles in the way of a too permeating police surveillance.” It is long overdue for Congress to pass legislation, like the Fourth Amendment Is Not For Sale Act, that would close this “data broker loophole” and safeguard privacy and civil liberties.
The dispute over the use of Claude in autonomous weapons also lays bare the dangers of lethal targeting without sufficient human oversight. The laws of war require the military to distinguish between combatants and civilians, and refrain from attacks that cause excessive civilian harm. These determinations are often context specific and may require judgment that AI is ill-equipped to exercise.
The Defense Department’s directive on autonomous weapons raises more questions than it answers about how the military addresses these risks. It requires senior Pentagon leaders to review whether autonomous weapons enable “appropriate levels of human judgment over the use of force.” But it’s unclear how this standard is satisfied when the weapon leaves no room for commanders or operators to override technical blind spots in life-and-death decisions. Congress should urgently impose safeguards to align autonomous weapons with the laws of war — and restrict the use of weapons that fall short.
Lauren Kahn
Senior Research Analyst, Center for Security and Emerging Technology (CSET) at Georgetown University
The first wave of American attacks during Operation Epic Fury saw the operational debut of LUCAS, a low-cost unmanned combat attack system, the first precise mass system fielded by the United States military. This long-range, one-way loitering munition was reverse-engineered from the Iranian Shahed-136, developed in 18 months, and integrated into CENTCOM in December 2025, just five months later. This pace was much faster than the Pentagon’s usual technology adoption timelines.
LUCAS platforms are modular, have a long range of up to 2,000km, and can deliver a 200kg payload. They also feature some degree of autonomy, anti-jamming, and swarming abilities. The systems can be launched using catapults, rocket-assisted takeoff, or from mobile platforms (including ships), and have already been undergoing testing and experimentation by the Marine Corps, Army, and Navy.
For years, the United States has relied on advanced, expensive platforms like the F-35 and the B-21 bomber, and on weapons like the Long-Range Anti-Ship Missile, to project force. LUCAS is a break from this approach. Although often called a drone, it shows how the term now covers one-way attack systems used as cruise missiles, not just an MQ-9 Reaper or the tactical FPV drones seen in Ukraine. At $35,000 a unit, LUCAS offers a more economical way to deliver precision effects. This shift matters because inventories of high-end munitions, such as Tomahawk cruise missiles, which cost the Pentagon about $2 million each, are easily strained by sustained operations and can take years to replenish.
While we do not yet know how many LUCAS systems have been used or how successful they have been, if LUCAS proves effective, especially in future strikes on Iran, the Pentagon could significantly increase production and reliance, and signal a wider shift by the US military toward scalable, cost-effective options for the precise mass era.
The information environment with guardrails off
Melanie Smith and Bret Schafer
Senior Directors of Information Operations, Institute for Strategic Dialogue
The spiraling conflict in the Middle East is the first large-scale, global conflict to test platform responses in an era when most companies have reduced their trust and safety teams and degraded their ability to fact-check or add context to war propaganda. While it is nearly impossible to quantify how those decisions have affected the visibility and reach of false and misleading content about the war, there is a sense that, on at least some platforms, the guardrails are completely off.
On X, for example, Iranian state-sponsored propaganda—including obvious instances of state-backed media outlets promoting AI-generated images alleging destruction of US facilities—is not only spreading without labels or community notes but is being served to some users in their “for you” feeds. In a chaotic breaking news environment, we can’t expect platforms to be able to respond to everything, but the very intentional decision to remove labels identifying state-sponsored media outlets means that audiences encountering this content are doing so without any contextual clues. Coupled with the ubiquity of AI-generated content across all platforms, the information environment feels demonstrably worse than it did during Russia’s full-scale invasion of Ukraine, when there was at least a sense that the platforms were trying.
Another key dynamic ISD will be monitoring closely over the coming days is how conflict conditions, particularly disruptions to information access, can intensify the effects of information manipulation.
Internet shutdowns have historically been used by authoritarian regimes during crises to limit the flow of information domestically. In practice, these shutdowns significantly alter the information environment in two important ways.
First, domestically, connectivity disruptions restrict citizens’ ability to access credible reporting, communicate with one another, and document events in real time. Without reliable connectivity, eyewitness reporting and independent verification become significantly more difficult. This constrains the ability of journalists, civil society actors, and international observers to build an accurate picture of events on the ground. Second, internationally, information shutdowns create a vacuum that can be rapidly filled by propaganda, speculation, and coordinated disinformation. In the absence of verifiable information, manipulated content and false narratives can circulate widely with fewer credible counterpoints.
Privatization of warfare
Brett Solomon
Senior Research Fellow, Human Rights Center at the University of California, Berkeley School of Law
Betsy Popken
Executive Director, Human Rights Center, at the University of California, Berkeley School of Law
The war in Iran and the wider Middle East is full-stack evidence that the contemporary battlefield is increasingly privatized. Whether in predictive targeting, situational awareness, information warfare, or related domains, many of these emerging warfighting capabilities are increasingly developed, operated, and supplied by the private sector.
And yet, who holds private companies accountable when the very states meant to regulate them are also hiring them as subcontractors? Today’s generation of venture-backed defense tech firms is, unlike traditional arms contractors, embedded deeply and continuously in the battlefield in real time – integrated from the software layer upward.
Take the role of AI, which now plays a central role in the war in Iran. Anthropic may have recently red-lined the Pentagon’s use of its systems for autonomous weapons and the mass surveillance of Americans, but its technology has effectively empowered an illegal attack on the sovereign nation of Iran, ultimately leading to the overthrow and death of its Supreme Leader.
And it’s not the first time this year that this San Francisco-based company has been used in an operation that illegally overthrew a foreign leader: its intelligence analysis, planning, and decision-support tasks were reportedly part of Trump’s arsenal in the Maduro overthrow.
The privatization of warfare extends beyond software and algorithms. If there are boots on the ground in Iran, troops may soon be using Meta + Anduril wearables to enhance soldier perception and decision-making on the streets of Tehran. These companies are building, and will continue to build, the very exoskeletons of the modern military.
Proprietary tech also serves as the backbone of civilian communications when states intentionally shut down access to the internet during a war, with Elon Musk’s Starlink serving as many as 50,000 terminals in Iran. In Sudan, Starlink previously shut down its services in parts of Darfur on orders from the government, depriving Darfuris of the internet and demonstrating the power a private company holds over civilians' access to information in a warzone.
This war may be fought by states, but it will be decided by those with access to the most advanced corporate technologies. In the end, the greatest winners will likely be Silicon Valley companies themselves, whose primary motivation is expanding into new markets and generating profit — not promoting peace.
Surveillance infrastructure and vulnerabilities
Azadeh Akbari
Professor of Critical Data & Surveillance Studies at the Center for Critical Computational Studies, Goethe University Frankfurt and Founder and Director of the Surveillance in the Majority World Research Network
Tehran changed irrevocably after 2009. Following a year of mass protests against the disputed presidential election that returned Mahmoud Ahmadinejad to power, CCTV cameras proliferated across the capital. Surveillance systems rapidly expanded into universities, schools, kindergartens, cafés, and restaurants. Business owners were permitted to operate only if they granted security forces access to their footage. Urban space was folded into an integrated architecture of monitoring and control.
That same year, without the knowledge of Iranian authorities, the Stuxnet worm infiltrated the Natanz nuclear facilities, marking the first confirmed instance of a cyberweapon causing physical destruction of critical infrastructure. It was a watershed moment: digital code had crossed decisively into kinetic effect.
On March 2, the Financial Times reported that Tehran’s traffic cameras had been compromised for years by Israeli intelligence—by the same unit that had run Stuxnet. Detailed knowledge of the movements of Ali Khamenei reportedly enabled a targeted strike at his residence. If accurate, this represents a striking inversion: surveillance systems built for internal repression repurposed for external attack.
Since beginning my doctoral research in 2016, I have documented the systematic misuse of CCTV technologies against Iranian women in the enforcement of compulsory hijab. Over time, these systems have grown more sophisticated, shifting between German, Chinese, and Russian providers. The same cameras used to identify unveiled women and track demonstrators were allegedly instrumental in locating the very leader who ordered their installation.
The lesson extends beyond Iran. Closed, nationally segmented digital systems do not automatically produce sovereignty or security. Infrastructure designed to control citizens can generate systemic vulnerabilities. A regime that constructs a surveillance society to entrench its power may ultimately find that the networks built to discipline society expose the state itself. One should also recall that Stuxnet did not remain contained; its code was repurposed and later used beyond its original target. Each time digital infrastructure is weaponized, a new front in cyberwar is opened. Tactics designed to eliminate enemies inevitably teach them new ones.