Participatory AI: Forging Shared Frameworks for Action
Tim Davies, Anna Colom, Lidia Velkova, Marta Poblet, Lydia Nobbs, Leda Kuneva / Feb 26, 2025

This essay is part of a collection of reflections from participants in the Participatory AI Research & Practice Symposium (PAIRS) that preceded the Paris AI Action Summit. Read more from the series here.

Jamillah Knowles & We and AI / Better Images of AI / People and Ivory Tower AI 2 / CC-BY 4.0
The launch of the new $400 million Current AI fund at the Paris AI Action Summit has focused attention on shaping technology in the public interest. This raises the question of how public voices will shape this fund and, perhaps more importantly, how the public interest can be put at the center of broader AI development and governance decisions.
A growing number of projects across the globe have engaged diverse groups to explore attitudes to AI and visions for how it should be built and governed. But these often remain ad hoc and marginal. A sharper focus is needed to move beyond experiments and achieve embedded, democratic influence on AI.
Avoiding participation washing
One thing worse than not involving different publics in decisions on how data and AI systems are made is doing it badly. The Participatory AI Research & Practice Symposium, held in Paris before the official Paris AI Action Summit on February 10-11, 2025, gathered over 200 people from various sectors to discuss participatory approaches to AI. As authors of this piece, we explored complementary frameworks that can address how we build, buy, govern, and, when necessary, resist the unfolding impacts of these fast-moving technologies.
A lot has been written about the risks of depoliticizing participation and turning it into a tokenistic practice. Current framings of multistakeholderism and collaborative governance risk going in a similar direction. This is why zooming out is essential. Better frameworks can help us step back and be more deliberate about how we approach participation for real-life impact. Who is involved in deciding what technologies we develop and adopt, and for what purposes? How, why, at what stage, and with what consequences? How is participation linked to governance and our civil and political rights? And across all these questions, how is power being shared?
Targeting power
Firstly, there is a need to be specific about what we are governing, for what purpose, and at what scale. ‘AI’ in itself means very little if we do not refer to and govern the data underpinning it, the type of technology, model, and systems in use, the infrastructure and who owns it, and the ultimate intended use of particular AI tools. To be effective, participatory involvement in the development or governance of AI needs to have clear targets in terms of the specific models, standards, regulations, or deployments that public input could change. It also needs to go beyond articulating principles or priorities for change to explore ways that desired changes can be operationalized through specific, localized, or institutionalized agreements and action.
The AI Issue Association model, for example, requires a co-creative and co-owned focus on practical solutions within the context of well-framed, individual issues. Relatedly, the digital self-determination framework, drawing on relevant histories, postcolonial struggles, and cultural contexts, invites us to build on collective dialogue and move towards the formalization of outcomes and creation of mechanisms that hold power accountable for following agreements.
Tapping into power
Secondly, it is important to consider the governance instruments at hand and the models of democracy relevant in each context and scale, as argued for in the multidimensional framework developed by Anna Colom and Marta Poblet that draws on the creation of an EU Health Data Space as a case study for how to target, tap into and mobilize power. It encourages us to pay attention to the multiple dimensions and relationships between hard and soft law, institutions, values, publics, contexts, and practices. There is no linear path to higher levels of participation. We need strategic choices about the right points of influence to focus on at any one time.
As part of this, we must not neglect existing forms of democratic and participatory space, not least at the local level. The capacity of our existing representative democracy to deal with technological change can be enhanced by drawing on other models of democracy, like deliberative democracy. Citizens’ assemblies and juries with a focus on AI have a particular potential to be deployed in the decision-making organs of governments at all levels, as well as in oversight institutions and authorities. The Paris City Council, for example, has a permanent Citizens’ Assembly that can propose legislation, develop bills and monitor the public policies of the municipality, and this could involve policies about using AI in the city's public services.
We also need frameworks to evaluate the degree to which consequential decisions are made democratically rather than unilaterally (whether by firms, regulators, or AI systems). The Democracy Levels framework developed by Aviv Ovadya et al. provides a common language for talking about how much decision-making power has been transferred from a unilateral authority to a democratic process: from lower levels, where democratic processes merely inform the unilateral decision (the case for a few early firm-led experiments today), to higher levels, where the output of the democratic process is binding, or even where the democratic process is self-governing (as yet unrealized in any large-scale context). The levels can be used as milestones in a roadmap for the democratic AI and participatory AI ecosystems to evaluate pilot projects and keep AI organizations accountable.
Mobilizing and rebalancing power
Thirdly, we need to recognize that participation comes from the bottom up as well as the top down. Alongside strengthening institutional participation, we also need to protect and promote the social, civil, and political rights that enable civil society, social movements, and different publics to claim and enact spaces for participation. Meaningful participation is about redistributing power over how decisions are made away from a small number of actors with inordinate economic and lobbying power.
AI systems are already causing real harms. They are being used for surveillance, to violate human rights, and to entrench inequalities, deepening the urgency to protect and defend people’s rights. Workers, consumer associations, and civil society organizations are often at the frontline of AI harms and need to be able to impact decision-making effectively. By drawing upon and coordinating sources of bottom-up power, collective action can target reputational and compliance levers to hold corporations and administrations accountable and push forward more representative democratic processes. Where regulation has not yet caught up or is not appropriate to address fast-moving developments, there is a space for a new kind of collaborative governance in the form of ‘Issue Associations.’ Ensuring multi-stakeholder representation and founded upon participatory approaches, such associations would focus on solving specific AI issues, co-creating solutions based on a multitude of perspectives instead of representing the interest of a single ‘most powerful’ group.
Conclusion
Calls for participatory AI are rooted in both ideals and pragmatism. Starting from commitments to values of democracy, human rights, and autonomy, calls for participation are a call to prevent these from being eroded by the current trajectory of concentrated power and unaccountable AI development. At the same time, a commitment to participation is a strategic move, seeking to align technical development with a robustly discerned public interest and to have mechanisms and mobilized publics that can hold the developers and governors of AI to account in delivering technologies and technical change that we freely choose and value.
Public involvement is needed at all levels and could foster trust in AI’s adoption and effective governance. But, in practice, it can’t be everywhere, all the time. That’s why we need clear processes and tools, shared approaches and language, to ensure that local efforts, global projects, insider initiatives, and outsider action connect and add up to more than the sum of their parts.