The Democratic Deficit in AI Humanitarian Systems: Why Community Participation Can't Wait

Marine Ragnet / Nov 13, 2024

Marine Ragnet leads the AI Peace project at NYU's Peace Research and Education Program (PREP).

November 3, 2024: Bidibidi Refugee Settlement, located in Yumbe District's West Nile sub-region in Uganda. Wikimedia Commons

In the rush to deploy artificial intelligence in humanitarian contexts, we're witnessing a concerning paradox: AI systems meant to support democratic peacebuilding are being implemented in fundamentally undemocratic ways. From refugee camps in Uganda to conflict zones in Yemen, AI technologies are reshaping humanitarian operations without meaningful input from the very communities they're meant to serve.

Consider the World Food Programme's SCOPE system in Uganda's Bidi Bidi refugee settlement, one of the world's largest refugee camps, hosting over 270,000 South Sudanese refugees. While intended to improve the efficiency of aid distribution and reduce fraud, the biometric system effectively conditioned food assistance on the surrender of personal data. Many refugees reported feeling compelled to enroll, raising serious questions about coercion and consent in situations of extreme vulnerability.

But there are glimpses of a better approach. In Yemen, researchers developing AI systems for peacebuilding took a more participatory path, using natural language processing and machine learning models for knowledge management, information extraction, and monitoring of conflict developments. Throughout the process, they mitigated risks by triangulating each model's findings with other data sources rather than deriving conclusions from any AI system in isolation.

The contrast between these cases highlights three critical challenges we must address:

1. The Participation Problem

Current AI deployment in humanitarian contexts often follows a top-down model in which systems are designed and implemented by external experts with minimal local input. In Uganda's SCOPE implementation, technical issues, including difficulties capturing fingerprints from elderly individuals or people whose hands were worn by manual labor, led to some refugees being denied food assistance. These problems might have been identified earlier through community consultation.

2. The Knowledge Gap

Problems arise when mediators lack data literacy skills, a methodological framework for co-creating these tools, and an understanding of how AI systems can be applied. In Uganda, SCOPE's operation was opaque to many beneficiaries, who often didn't understand how their data was being used or stored, creating a significant barrier to meaningful participation and consent.

3. The Power Imbalance

The global landscape of AI development itself reflects stark imbalances, with certain nation-states and tech corporations wielding significantly greater resources and expertise than others. This disparity translates into geopolitical asymmetries within humanitarian peacebuilding initiatives, where actors with greater access to AI capabilities hold undue influence over decision-making processes.

The solution isn't to abandon AI in humanitarian work – the technology's potential benefits are too significant to ignore. Instead, we need a fundamental shift toward democratic AI governance. This means:

  1. Mandatory community consultation periods before AI system deployment
  2. Local oversight committees with real power to modify or reject AI implementations
  3. Investment in AI literacy programs for affected populations
  4. Clear mechanisms for communities to challenge algorithmic decisions
  5. Regular public audits of AI systems' impacts on local power dynamics

The Yemen case offers a template for this approach. By treating AI as a tool to support human-led processes rather than a replacement for them, and by prioritizing local participation, the project demonstrated how technology can enhance rather than erode democratic agency. Leveraging AI systems as tools for mediation in peacebuilding, rather than as a shortcut to ending violence and promoting reconciliation, allows actors to develop a granular analysis of complex, protracted conflicts and of the coalitions participating in them.

Critics might argue that emergency humanitarian situations don't allow time for extensive community consultation. But this view misunderstands both democracy and effective humanitarian action. When AI systems fail to account for local contexts or create new forms of exclusion, they can actually harm the communities they're meant to help. Democratic participation isn't a luxury – it's a necessity for effective humanitarian intervention.

The stakes couldn't be higher. As AI systems become more prevalent in humanitarian contexts, the patterns of governance we establish today will shape power dynamics for years to come. Will we perpetuate a model where vulnerable populations must submit to algorithmic authority to receive aid? Or will we build democratic frameworks that ensure communities have a real voice in the technologies that affect their lives?

The choice is ours, but the time to act is now. Every AI system deployed without meaningful community participation further entrenches undemocratic practices in humanitarian spaces. By prioritizing democratic governance in humanitarian AI, we can ensure these powerful technologies serve as tools for empowerment rather than instruments of control.

The answer isn't less technology in humanitarian contexts – it's more democracy in how we govern it.
