Assuring Inclusive AI: Reviewing Evidence as a Call to Inclusive AI Action

Susan Oman / Feb 26, 2025

This essay is part of a collection of reflections from participants in the Participatory AI Research & Practice Symposium (PAIRS) that preceded the Paris AI Action Summit. Read more from the series here.

Jamillah Knowles & Reset.Tech Australia / Better Images of AI / People on phones (portrait) / CC-BY 4.0

Last week’s Paris AI Action Summit concluded with a “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.” Its priorities of reducing digital divides and “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all” appeased some advocates for regulations that account for AI’s social impacts. In Europe, the first rules of the EU AI Act came into force on February 2, 2025, requiring AI “providers and deployers” to ensure staff understand the context in which AI systems are to be deployed and to consider the people and groups on whom the AI systems will be used. Introducing social impact as a compliance risk offers far greater leverage than simply insisting that people should care. Yet current AI assurance frameworks are inadequate to ensure inclusive AI.

Just after the AI Act rules became applicable and just before the Paris Summit, the US federal government announced it is seeking “input from the public” to develop its AI Action Plan. Explicit mentions of social concerns are notably missing, and the request is framed in a way that diminishes the role of the “public”: rather than the public, the government seeks responses from “interested parties.” In essence, it is another public consultation that is not really interested in including voices from a true range of publics.

I lead an evidence review on the UK-based Public Voices in AI project. We have three target beneficiary groups:

  1. Publics, especially groups most negatively affected by, and underrepresented in, AI research, policy & development;
  2. Responsible AI researchers, developers, and decision-makers; and
  3. Stakeholders who need convincing of the value of responsible AI.

Reviewing evidence enables a re-evaluation of what is known about how different publics are consulted and how this aids understanding of ‘persons or groups of persons’ affected. This informed a call to action I presented at the Participatory AI Research & Practice Symposium at the Paris AI Fringe on February 8, 2025.

Reviewing evidence

Evidence reviews are generally unremarkable research tools, often preliminary to identifying research gaps and/or refining aims and methods. The systematic review is considered the most robust and is favored by decision-makers: keywords are used to identify publications in databases, which are then assessed against criteria, often following medical models of ‘gold standards’ and prioritizing representative samples or randomized controlled trials. A list of ‘good quality’ research is constructed, and headline findings are synthesized. The rationale is exclusion: cleanse your sample of examples outside these standards – often small samples from participatory approaches published in low-ranking journals. This validates certain bodies of knowledge and invalidates others, often further marginalizing the voices of those already marginalized. This presents huge quality and assurance issues for evidence that could aid inclusive AI.

What if we co-opted the evidence review method to re-view evidence from across the AI field to improve our understanding of different publics? What if we used this knowledge to enable an evidence-based intervention in AI decision-making, development, and deployment? Could this knowledge expand on AI assurance to take AI action?

Many scholarly publications touch on people’s experiences of and feelings about AI. What we lack is an understanding of the differences across these studies: who is researching whom, how, and why? Indeed, would we even consider all these publications to be in the same ‘field’ of research? So, should we reject those from outside our field? Or should we include them?

For evidence to inform inclusive AI, we need to appraise all studies that ask people about AI. Methods, frameworks, and theories are points of contention in approaches to public attitudes research. Similarly, the politics of participation remain a site of contestation in this field. These debates can be productive but are far from cohesive across deliberative democracy work. Most importantly, not all this research generates quality outputs or evidence that is readily useful to inform AI decision-makers, developers, or deployers. We also know that not all evidence in this space assures the inclusion of diverse or marginalized populations, and when it does, can we be sure that diverse voices are not misrepresented? Building knowledge of who is missing from the evidence and who is being misrepresented is crucial to leveraging useful evidence for inclusive AI action.

Call to action: AI assurance as inclusion

AI assurance is a regulation mechanism, enabling organizations to “measure whether systems are trustworthy and demonstrate this to government, regulators, and the market.” The EU AI Act demands that assurance mechanisms include the diverse voices of those most impacted by an AI system. Evidence from a forthcoming Public Voices in AI survey of AI researchers suggests that most AI experts want AI to reflect human values but that they don’t consume or consult social science research to understand what that looks like. This issue needs addressing.

Through the process of reviewing evidence, we can demonstrate the value of public voices to AI assurance and push for inclusive AI Action. I propose the following:

  1. All voices, especially those most affected by AI systems, appear in evidence that is readily usable by those working in AI tech/policy.
  2. Special care is given to ensure voices are not misrepresented or missing.
  3. AI assurance measures demand AI literacy among developers, deployers, and decision-makers that meaningfully includes diverse public voice perspectives.

Reviewing evidence is necessary for AI literacy to ensure the meaningful inclusion of people’s perspectives and to discredit evidence that misrepresents them. At the London AI Fringe, Dinan Alasad from Faculty AI asserted that we must reframe inclusion as a necessary technical solution to dysfunctional systems instead of a ‘favor’ bestowed upon marginalized groups. The AI Act introduces a legal risk dimension. Being armed with impactful evidence of quality and inclusion assurance is our best route to ethical and inclusive AI practice that AI decision-makers, developers, and deployers will adopt.

Authors

Susan Oman
Dr Susan Oman is a Senior Lecturer in Data, AI, and Society and AI & In/equality Lead for the Centre for Machine Intelligence (CMI) at The University of Sheffield. Susan’s research investigates how data, AI, evidence, policy and practice work for the publics they claim to serve. She often does this ...
