Meeting the Paris Summit Goal for ‘More Inclusive’ AI Means Getting Stakeholder Engagement Right

Tina Park / Feb 27, 2025

This essay is part of a collection of reflections from participants in the Participatory AI Research & Practice Symposium (PAIRS) that preceded the Paris AI Action Summit. Read more from the series here.

PARIS — On February 10-11, 2025, France hosted the Paris AI Action Summit at the Grand Palais.

One of the most resonant messages from the 2025 Paris AI Action Summit was the call to loosen the reins of regulation and increase public- and private-sector investment to permit greater technological innovation. Countries are locked in a geopolitical race for AI hegemony, advancing AI innovation through large investments and divergent governance policies. Technology companies seek to build ever more advanced systems that reach global audiences and dominate markets. And despite a shared, signed statement committing to more inclusive and sustainable AI systems, how this is to be achieved in partnership with the very people whose lives are impacted and shaped by AI remains ambiguous.

We cannot allow “stakeholder engagement” to remain so loosely defined and contested. The reality is that commercial entities developing and deploying AI technology, particularly “Big Tech” companies, exercise tremendous autonomy over how AI is developed, including who gets to be involved in that process.

While regulatory efforts to govern AI's use are growing, and governments and civil society work to define what “public interest AI” may entail, the private sector lacks consistent incentives or guardrails to operate with transparency and accountability toward the general public. Stakeholder bodies, such as civil society organizations, marginalized communities, labor groups, and consumers, are finding it necessary to engage corporate entities directly to identify and mitigate the potential risks and harms of AI technology and to secure some form of public governance, yet they have few clear pathways to do so. So, regardless of our domain or sector, we all have an incentive to put technology companies on track when it comes to stakeholder engagement.

Corporate AI developers are not ignorant of the important role external stakeholders play in developing marketable technology that works as intended and minimizes unforeseen harms and risks. However, the incentives and risks of integrating external stakeholder engagement into private-sector AI work differ from those facing public-sector organizations, such as not-for-profits or government agencies. Competing in open markets, for example, means private-sector companies are trying both to build good systems and products and to beat their competitors to market as quickly as possible. Recommendations need to account for these different incentives and risks if the guidance is to be practical and implementable.

This is especially important because inclusive and equitable stakeholder engagement is difficult—regardless of the domain, sector, or audience—and requires consistent steps to advance this greater aim, even if mistakes are made along the way.

Anyone who has done stakeholder engagement understands the complexity of working with a set of stakeholders or the broader public, and the limits of what engagement alone can resolve. Even when established with the best intentions, the interaction between those seeking input and those providing it can be harmful. History has shown us that some forms of stakeholder participation can harm marginalized communities by exploiting their intellectual labor. Or, participants might experience their time and efforts as “wasted” because their input is not integrated into the final output.

The diversity of stakeholders also makes it challenging to translate insights gained into actions taken. People come in with an intersection of different identities and experiences. Their demands, concerns, and insights may converge, but they are also just as likely to be in conflict with what other stakeholders are saying and prioritizing. Parsing through the seemingly conflicting input and deciding between stakeholders is an extremely difficult aspect of working with external stakeholders and the public, especially because, ultimately, it may seem like certain voices or insights have to be prioritized over others.

Ultimately, stakeholder engagement cannot be defined as a binary, “Did you talk to someone from outside of the company or not?” The degree to which stakeholder participation is harmful or empowering depends on how stakeholders are treated, how power relations between the organizer and the stakeholder are maintained or disrupted, and what is produced through the engagement (output or outcome).

We need a broader infrastructure to ensure that the most empowering version is achieved. An infrastructure that does not require unearned trust as a prerequisite but addresses any existing mistrust directly to allow everyone to remain committed to potentially uncomfortable processes. This includes transparent decision-making processes supported by systems that enable the public to hold the decision-makers accountable for their commitments. Information must be communicated simply but thoroughly across different languages and mediums to ensure that literacy, language, ability, and physical accessibility are not barriers to participation.

Stakeholder participation in and of itself is not intrinsically ‘good,’ and it cannot rectify inherently harmful products and systems, like AI systems designed to enhance warfare. However, the limitations of stakeholder engagement should not prevent us from better defining what we as a field must do to ensure the involvement of socially excluded communities in important decisions. Absent the involvement of advocates centered on social equity, stakeholder engagement can become a means of ethics-washing, providing a veneer of social good and public benefit to a technology that ultimately causes physical and social harm.

Authors

Tina Park
Dr. Tina M. Park is the Head of Inclusive Research & Design at the Partnership on AI. She focuses on working with impacted communities on equity-driven research frameworks and methodologies to support the responsible development of AI and machine learning technologies. Building on PAI’s Methods for ...
