Civil Society: The Necessary Counterpower in AI Governance

Constance Bommelaer de Leusse, Pierre Noro / Feb 24, 2025

This essay is part of a collection of reflections from participants in the Participatory AI Research & Practice Symposium (PAIRS) that preceded the Paris AI Action Summit. Read more from the series here.

Auguste Couder, Opening session of the General Assembly, 5 May 1789. Wikimedia

The 2025 Paris AI Action Summit offered an opportunity for a historical reference. Taking stock of his kingdom’s many crises, Louis XVI summoned the Estates General in 1789, gathering delegates from the clergy, the nobility, and the commoners. Although this “Third Estate” represented at least 97% of France’s population, each “order” would get one vote, leaving its delegates with little prospect of passing substantial reforms that would constrain the closely aligned clergy and nobility.

By pledging to remain united and engaging with the delegates of the other orders, the Third Estate achieved institutional progress, including the adoption of the Declaration of the Rights of Man and of the Citizen, which recognized the equality of all citizens, and of the first French constitution.

Why does empowering citizens to take part in AI governance matter?

Prior to and during the Summit, civil society and academia continued to lead the charge to democratize governance processes. A coalition, including the Sciences Po Tech & Global Affairs Innovation Hub and the AI & Society Institute (ENS), organized two public consultations, receiving more than 11,000 contributions from citizens around the world to inform the Summit’s discussions. Sciences Po’s Tech & Global Affairs Innovation Hub then welcomed nearly 300 academics, civil society experts, and activists for the open Participatory AI Research & Practice Symposium (PAIRS) to present use cases and share good practices in participatory AI development, research, and evaluation.

These efforts were not merely intended to enhance the inclusivity of the Summit and the legitimacy of its outcomes. PAIRS was organized to give civil society stakeholders the scaffolding both to counterbalance the risks AI poses to democracy and human rights (as identified, for instance, by Gina Neff) and to steward responsible, trustworthy, and equitable AI systems for all.

This seems especially necessary as intensifying competition could spiral into a race to the bottom, with States tempted to align their strategies with AI companies’ interests, rendering multilateral AI governance nearly impossible, at the expense of the public good.

State sovereignty at risk and the AI governance dilemma

A recent paper published by the Hub with Wendy Hall and Kieron O’Hara explores the differences between the evolution of Internet governance and emerging AI management strategies. The Internet developed over a long period, relying mainly on public infrastructure and on a model promoting openness and interoperability. The recent burst of AI technologies, by contrast, although still boosted by public research, is driven by private companies investing heavily in their own infrastructure to train and deploy closed or only partially open models that have seen spectacular adoption.

The pace of innovation and the massive troves of capital in the hands of Big Tech companies and AI startups make it difficult for public institutions to attract the talent required to aggregate expertise, develop AI-based public systems, or craft effective regulation. This dependency on private actors, whether consultants or AI companies themselves, at every step of the policy cycle poses a risk to States’ sovereignty when it comes to governing AI.

This asymmetry of resources is amplified by a context in which most States are scrambling for levers to stimulate growth in a lackluster economic landscape. With looming planetary boundaries constraining economic activities and little room for maneuver regarding public spending, elected officials may find it difficult to resist AI companies promising efficiency gains across industries (including public services and security) on the condition that regulation is limited and does not stifle innovation.

While many States and their populations would benefit from national and multilateral norms to ensure responsible, fair, and equitable AI development and deployment, AI governance repeatedly shows symptoms of a social dilemma. Despite the Paris Summit’s focus on AI’s immediate impacts, world leaders’ decision to center their speeches on promoting national innovation ecosystems, and the refusal of the US and UK to sign the Summit’s final declaration, are evidence of a lack of alignment.

Civil society as a counterpower and governance enabler

In game theory, when an equilibrium results in a negative collective outcome, one way to shift it is to bring in additional players. Speakers at PAIRS showcased how civil society participation in AI governance and development is not just about ensuring legitimacy and acceptability. It is critical to unlock this policy dialogue, to anchor algorithmic systems in the real needs and aspirations of populations, and to design and audit systems that treat their beneficiaries fairly and with dignity.

The stakeholders gathered at the margins of the Summit outlined ways to move beyond the “collective action problem” and ensure AI systems protect, and hopefully even nurture, the commons on which our societies are founded. The use cases presented in Paris and in this series of posts in Tech Policy Press demonstrate that civil society is empowering citizens and public servants to act as “checks and balances,” to have a voice in policy design, and to seek redress for biased and harmful deployments.

Critical next steps to amplify these voices include:

  1. Creating standardized but adaptable methodologies to foster comparable results and facilitate collective action, including across borders.
  2. Building strong strategic alliances between civil society organizations and regulators looking to break through the social dilemma and elaborate AI strategies rooted in citizens’ interests.
  3. Identifying sustainable resources to safeguard civil society’s independence and its ability to act as a counterpower.

Just as the Third Estate revolutionized France’s governance in 1789, a strong civil society counterpower, properly resourced, methodologically aligned, and strategically connected to pro-regulation policymakers, might be our best hope for ensuring AI serves the interests of the Global Majority rather than concentrating power in the hands of a few. Building a cohesive international network presents its own challenges: it could rely on ad hoc coalitions, at the risk of fragmentation, or lean on existing international organizations, whose legitimacy (also undermined by this race to the bottom) could be bolstered by new bottom-up alliances.

Authors

Constance Bommelaer de Leusse
Constance Bommelaer de Leusse has more than 20 years of experience in digital policy, technology, research, and education. She currently serves as the Senior Advisor of the AI & Society Institute (ENS-PSL) and of the Tech Hub of the Paris School of International Affairs (university of Sciences Po). ...
Pierre Noro
Pierre Noro is a lecturer at SciencesPo Paris and Université Paris-Cité, and an Advisor to the Paris School of International Affairs Tech & Global Affairs Hub. His work is dedicated to decentralized governance, digital ethics and the social and environmental impacts of digital technologies. After wo...
