Beyond the Façade: Challenging and Evaluating the Meaning of Participation in AI Governance
Jonathan van Geuns / Feb 19, 2025

This essay is part of a collection of reflections from participants in the Participatory AI Research & Practice Symposium (PAIRS) that preceded the Paris AI Action Summit. Read more from the series here.

The Paris AI Action Summit at the Grand Palais, February 10-11, 2025.
Participation in AI governance today is a grand performance, an ornate Parisian theater where the actors—corporate executives, policymakers, and technocrats—speak in the hushed, measured tones of responsibility, conjuring the illusion of collective decision-making. Step inside, and you find the machinery of governance untouched by the voices it claims to include. Companies, NGOs, and governments have learned that performative engagement can be a highly effective tool. AI ethics boards, stakeholder roundtables, public fora; these are not building blocks of democracy but the aestheticized mannequins of “participation-washing,” where inclusion is simulated and dissent is metabolized into polite irrelevance. They create the illusion of democratic oversight while retaining complete control over outcomes. This is not governance; it is plain window dressing.
For audits and assessments to truly shift power, they must first extricate themselves from the architecture of obscurity in which they are currently ensnared. Right now, these processes serve as mechanisms of absorption rather than transformation: they collect critique, distill it into non-threatening reports, and return it to the public sphere as evidence that something, anything, is being done. They operate like exhibits of engagement, inviting spectators to admire the intricate formalities of AI governance while ensuring that no fundamental structures are moved.
The problem is not merely that participation lacks teeth. It is that the ecosystem has been designed to neutralize its bite: companies invite scrutiny and then unilaterally decide what scrutiny means. They fund advisory councils, then retain discretion over whether to heed their recommendations. They convene panels of ethicists, then disband them when scrutiny becomes inconvenient. This is not a failure of participation; it is its co-optation. The very structures that should be contesting control are instead deployed as a buffer against accountability.
This is compounded by the fact that we lack structured ways to evaluate whether participation in AI governance is meaningful. We measure how many ‘stakeholders’ were consulted, but not whether their input had a tangible impact. We count how many fora were held, but not whether they resulted in substantive policy changes. The focus on process rather than outcome allows participation-washing to thrive. A rigorous framework must address key dimensions: inclusivity, power redistribution, trust-building, impact, and sustainability. Metrics must shift from procedural to structural. Instead of asking whether diverse stakeholders were invited, we must ask: Did participation redistribute power? Did it shift decision-making authority? Did it result in material changes?
Participation cannot be a consultative exercise in which affected communities are permitted to voice their concerns but are granted no authority to act upon them. It must be constitutive, shaping not just the margins of decision-making but its very terms. This means participation must be structurally embedded, not as an advisory function but as a sovereign one. This could take the form of legally mandated public oversight boards with veto power over AI applications, regulatory bodies where community representatives hold decision-making authority, and mechanisms that allow affected groups to directly challenge and contest AI-driven decisions or demand the withdrawal of systems that entrench injustice.
It is not enough to audit AI systems; we must be able to dismantle them. The frameworks that claim to oversee AI must contain mechanisms of refusal, not just oversight. When an audit exposes an algorithm as structurally racist, it must have the legal force to decommission it. When a system disproportionately harms workers, tenants, refugees, or citizens, those communities must not only be consulted but empowered to halt its deployment. Participation must not just document harm; it must interrupt it. Power concedes nothing to critique alone.
AI governance today is not merely a site of contestation but of outright corporate capture. Tech firms operate as private legislators, designing the ethical frameworks that govern their own conduct. They speak of openness while hoarding proprietary algorithms, of fairness while patenting bias, of safety while exporting surveillance infrastructure to authoritarian regimes. Against this backdrop, any framework that does not address the fundamental asymmetry of power in AI development and deployment is not just ineffective; it is complicit. Therefore, participatory audits and assessments must be legally binding and designed not just to diversify input but to redistribute control and reconfigure power.
Participation also must be continuous and iterative; systems evolve, as do their societal impacts, and risks and harms often emerge long after deployment. The model of a one-time consultation before deployment is a grotesque absurdity. Governance must be ongoing, adaptive, and structurally resistant to the inertia of neglect, ensuring communities remain involved in decisions over the entire lifecycle of an AI system. This also means establishing long-term funding and legal mandates for participatory structures.
Ultimately, the crisis in AI governance is not just about flawed AI. It is about the corporate capture of governance processes. The challenge before us is not just to increase participation but to ensure that participation leads to genuine redistribution of power. Without robust evaluation frameworks, clear enforcement mechanisms, and a fundamental rethinking of who gets to define AI governance priorities, participation will remain an empty window display designed to signal transparency and accountability while protecting the very structures that sustain inequality.