Perspective

The 'Humanitarian Halo': When Tech Sells One Stack for Aid and War

Purvi Patel, Nicole Bennett / May 8, 2026

Camouflaged Starlink system during military exercises in the Chernihiv region, Ukraine, June 2023 (Photo by Maxym Marusenko/NurPhoto via AP)

After an earthquake or a wildfire, the modern disaster story has a familiar visual language: a satellite image, a heat map, a dashboard that promises clarity in chaos. Those tools can be genuinely useful. But using them effectively in crisis settings requires affected populations to trust that the technology is intended, and will actually be used, to help them. That trust depends on how the tools are used in practice.

‘Dual-use’ refers to goods, software, and technologies that can serve both civilian and military applications. That framing can be too generous, because it treats dual-use as an accidental property of neutral tools. In practice, dual-use has become a strategy: build a general-purpose system for seeing, sorting, and acting on information; sell it everywhere, including to militaries; point to the humanitarian deployment when critics ask what kind of world you’re building.

But dual-use today isn’t just about whether a satellite or a drone could be repurposed. The harder problem is that the same companies increasingly build and sell the same core capabilities into humanitarian operations and into military and intelligence systems and then let the humanitarian use serve as a kind of reputational varnish. A company can describe its tools as enabling life-saving assistance in humanitarian deployments, even as credible defense reporting describes those tools analyzing battlefield data to identify targets and support strikes, because the technical core (integrating data streams and producing action-oriented outputs) travels easily across aid and war applications.

Take Palantir. The World Food Program announced its partnership with Palantir in 2019 as part of WFP’s push for digital transformation, aimed at improving real-time decision-making and operations. WFP’s argument was straightforward: better data integration can move food and cash assistance more efficiently, and that matters when people are hungry.

The controversy emerged just as quickly. The partnership faced significant criticism from human rights, humanitarian, and technology-for-good groups. Immediate opposition centered on protection issues and the optics of linking a UN relief agency with a company closely tied to intelligence and immigration enforcement. None of this proves that WFP’s logistics staff were secretly creating a targeting pipeline. The main point is simpler: once a humanitarian organization depends on a vendor whose core business also supports security agencies, the humanitarian side can’t ignore the rest of the portfolio.

And Palantir’s defense portfolio is not peripheral. In March 2026, Reuters reported that the US Department of Defense would designate Palantir’s Maven system as a “program of record,” essentially locking it in as a core military capability with stable, ongoing long-term funding. The same Reuters piece notes that Maven analyzes battlefield data to identify potential targets, and has supported thousands of targeted strikes in recent weeks. This is what the same stack looks like: the data-integration and decision-support logic that sounds like common sense in a supply chain becomes a force multiplier in a kill chain.

In recent testimony to the UK Parliament, a British surgeon who has worked in Gaza recounted patients, including children, describing how, after they were initially wounded by an aerial strike, drones flew in to locate and shoot them individually as they lay on the ground. When this type of technology is deployed after an incident with widespread civilian casualties, how would civilian victims know whether drones were coming to survey the damage for humanitarian response and assistance, or to shoot them once they are already injured?

If this makes you uneasy, it should. Humanitarian work depends on trust. Not only that aid is delivered, but that aid organizations remain distinct from the parties to conflict. The Red Cross and Red Crescent movement’s principles, neutrality and independence in particular, exist because communities and armed actors make life-and-death judgments about who counts as a threat. When a humanitarian agency becomes legible as part of a broader security ecosystem (or when it looks that way from the outside), the risks fall on staff and civilians first.

The satellite economy shows the same pattern, only on a planetary scale. UNOSAT, the UN’s satellite analysis center, provides imagery analysis for humanitarian emergencies related to disasters and conflict—work that supports evidence-based response. At the same time, the commercial imagery market is deeply entwined with national security. The National Geospatial-Intelligence Agency announced in 2024 that it awarded Maxar a roughly $359 million Commercial GEOINT Access Portal contract. This is effectively an official reminder that the same imagery supply chain that helps map a flood also feeds intelligence workflows.

And these are not abstract entanglements. In March 2025, Reuters reported that the US government temporarily disabled Ukraine’s access to satellite imagery through Maxar on a US government platform (GEGD), with NGA confirming the action. The lesson here is that commercial visibility can be governed through chokepoints shared by states and vendors. If you build humanitarian verification on top of those chokepoints, you’ve also built a system where humanitarian sight can be throttled by geopolitical decisions.

Planet offers another clean illustration. Planet has a formal “Rapid Response” program tied to humanitarian disaster response, and its own terms describe the program as initiated by Planet and the Digital Humanitarian Network to help get Planet’s data “to the right people at the right time.” Meanwhile, Planet also markets directly into military and alliance contexts: in 2025 the company announced it had been selected for a NATO contract to deliver “advanced daily monitoring and intelligence capabilities.” Again: one company, one sensor infrastructure, and two narratives (humanitarian responsiveness and alliance surveillance) running in parallel.

Starlink shows how quickly disaster connectivity can become a form of strategic leverage. SpaceX markets Starlink for emergency response, including sending kits to vetted organizations operating on the ground during disasters. But the Belfer Center’s case study on Starlink in Ukraine describes a dependency that exposed the fragility of outsourcing critical connectivity to a private actor, and the politics that follow from that reliance: on at least one occasion, questions arose about whether a private provider had manipulated the availability of Starlink services to influence the outcome of a military offensive in progress. In such scenarios, the technology can shift from essential asset to frightening liability in an instant, on the whims of people far removed from the front lines. If a satellite internet service can shape the outcome of a war and the credibility of a humanitarian response, then private infrastructure becomes a matter of governance, not just support. And in many cases, this is about more than leverage. In the worst cases, humanitarian need and channels of aid can be held hostage for political gain. This is something we used to accuse Somali warlords of doing with food aid in the 1990s (although that analysis looks more nuanced in light of recent discussions of food aid diversion), but the digital version can have far wider impact, and Starlink's role in the Ukraine war is a prime example.

So what do we do with this, beyond resignation?

Humanitarian systems already know the danger of dependency. The Oslo Guidelines, longstanding guidance on the use of military and civil defense assets in humanitarian response, warn that UN humanitarian agencies should avoid becoming dependent on military resources, and encourage investment in civilian capacity rather than ad hoc military support. The dual-use vendor problem is the updated version of that warning. Pragmatically, what does this mean in a reality where official actors, like the US government, may move back toward military units delivering humanitarian assistance while undermining the civilian infrastructure for aid delivery? What happens if (or perhaps “when”) the White House decides it is simply easier to add a rider to Department of Defense contracts stipulating that vendors also be available for humanitarian response, as a condition of the contract? The dependency risk hasn’t vanished; it’s moved from helicopters and convoys into data platforms, satellite tasking, and proprietary analytics.

A serious response starts with refusing to let the humanitarian halo do the governance work.

Humanitarian agencies and their donors should make dual-use disclosure a standard: if a company sells the same platform family for defense, intelligence, or domestic security, that must be disclosed up front, not uncovered through investigative journalism. Procurement should then include purpose limitations and non-reuse terms with teeth: no downstream security use of beneficiary or operational data, clear retention limits, and independent audit rights. (If a vendor refuses to accept that, the partnership isn’t genuine; it’s capture.) The sector should also build exit strategies: interoperable standards and migration capacity, so that “we can’t leave” doesn’t quietly justify permanent dependence.

None of this denies that dual-use tools can be helpful. They can be. The question is whether humanitarian deployment functions as a moral alibi or as the standard against which use is restricted. When the same technology quietly supports both the logistics of feeding people and the mechanics of targeting them, the issue isn’t the tool itself. It’s the governance behind it.

Authors

Purvi Patel
Purvi Patel is an attorney and public health professional in the international humanitarian and international development sectors, specializing in livelihoods, protection information management, refugee status determination, and emergency response. She is currently adjunct faculty at Wheaton College...
Nicole Bennett
Nicole Bennett is a scholar and practitioner working at the intersection of migration, data governance, and digital technologies, with a focus on how artificial intelligence and data-sharing systems reshape mobility. Her research examines the political economy and geography of data in refugee and as...
