Multistakeholder Promises and Power Gaps in Global AI Summits
Elonnai Hickok, Shashank Mohan, Jason Pielemeier, Jhalak M. Kakkar / Mar 17, 2026
India's Prime Minister Narendra Modi, seventh left, poses for photographs with chief executive officers of various AI groups during the AI Summit in New Delhi, India, Thursday, Feb. 19, 2026. (Indian Prime Minister's Office via AP)
Global AI summits have increasingly embraced the language of multistakeholder governance, but meaningful participation by civil society and academic actors remains limited. Across the series of AI summits — from Bletchley and Seoul to Paris and New Delhi — governments have gradually expanded opportunities for engagement with researchers, civil society organizations, and other stakeholders. These efforts have helped bring issues such as democratization, sovereignty, equity, and inclusivity into global AI governance discussions. Yet the ability of non-state and non-corporate actors to shape agendas and outcomes has remained constrained. Too often, these summits resemble trade shows showcasing industrial prowess rather than forums for substantive governance conversations.
The New Delhi AI Impact Summit Declaration, signed by more than 90 countries, including China and the United States, continued this trajectory by formally recognizing international cooperation and multistakeholderism. The 2026 India AI Impact Summit also created additional avenues for participation, particularly for civil society groups, researchers, and academics from the Majority World. However, the inclusion of these themes in summit agendas and declarations has not yet translated into meaningful influence over decision-making.
If global AI governance is to address real-world impacts — both positive and negative — the architecture and institutional processes of these Summits must evolve. Multistakeholder participation for civil society groups and academic actors should move beyond representation toward active involvement in agenda-setting and decision-making. This is especially true for those from the Global Majority, who face additional barriers to participation and power. To allow for truly meaningful, multistakeholder governance, the path to the upcoming UN Global Dialogue on AI Governance and the 2027 Global AI Summit in Geneva must be open, inclusive, and rights-focused, and prioritize a bottom-up civil society agenda.
Thematic priorities of AI Summits: From the UK to India and beyond
The previous three Summits in Bletchley, Seoul, and Paris focused on frontier risks, safety research, building safety networks, and advancing public interest AI. As the first summit hosted in a Global Majority country, the India AI Impact Summit shifted the global discourse to prioritize perspectives and needs of the Global Majority, integrating diverse perspectives into the official program.
While the New Delhi AI Impact Declaration is widely welcomed for its focus on sovereignty, democratizing access to AI resources and infrastructure, supporting locally relevant innovation, and strengthening resilient AI ecosystems, it falls short of addressing human rights or establishing mechanisms to track the implementation of voluntary commitments. Democracy and sovereignty are words increasingly used by the White House and US AI companies to position their products and services as more rights-respecting and competitive than their Chinese alternatives. Without a deliberate push for autonomy, the Global Majority risks merely renting Big Tech's models.
Although multiple initiatives announced at the India Impact Summit are still in their early stages of implementation and are, again, voluntary and non-binding, it is noteworthy that multistakeholder collaboration is emphasized across multiple deliverables. In practice, the commitments could create a framework for cooperation rooted in inclusive, democratized AI. However, with the next summit being hosted by Switzerland, it is necessary to ensure that the thematic focus on the needs and issues of the Global Majority remains part of the core agenda of global AI governance. The answer lies in the architecture of the Summits.
Evaluating the summit architecture
To date, the architecture and organizational processes of these Summits have lacked coherence, offering civil society and academic stakeholders only a limited role in shaping agendas and outcomes. This inconsistency is evident across the Summit series: the Bletchley Summit had limited civil society participation; the Paris Summit utilized a multistakeholder steering committee and working groups with variable impact and restricted access to the Main Summit.
Ahead of the Delhi Summit, the Government of India established working groups (the outcomes of which are available on the IndiaAI website), expert engagement groups, and accredited pre-events organized and hosted by a range of stakeholders. Civil society sessions were accredited and integrated into the main Summit agenda and as satellite events, including our multistakeholder side event Reinforcements and Learning: Multistakeholder Convening on AI Governance, which convened over 400 leading academic, civil society, company, and government experts, and formed part of our broader MAP-AI project activities in New Delhi. The summit was also open to participation by the general public.
Yet, as with previous summits, the Delhi Summit's broader participation did little to connect academic and civil society discussions to actual decision-making on the main agenda and outcomes. Space was certainly created for civil society participation at the India Summit, but the contours of inclusion were not negotiated; they were granted. For instance, on the day of the Prime Minister's speech, with heads of state and senior executives in attendance, civil society was not permitted to attend.
Unfortunately, promises around Global Majority leadership did not translate into meaningful action, either. The lasting image of the India AI Impact Summit was the Indian Prime Minister holding hands with mostly US Big Tech CEOs, all of them men.
Across all summits, a persistent disconnect remains between the government decision-making track and civil society groups or academic mechanisms. To shift these vital voices from the periphery to the core, future Summits must move beyond simple participation toward shared spaces and “mingled tracks” that bring together governments, companies, and civil society in deliberative spaces. True multistakeholder outcomes require an exchange of perspectives that transcends panel discussions, allowing for real, integrated input from government, industry, and academia alike.
To enable this, the following issues must be addressed:
Lack of a governing framework: The current summit model operates on an ad-hoc basis, lacking a comprehensive governing framework, a central secretariat, or a coordinating body. This institutional vacuum creates a gap in predictable “continuity” between host nations. For example, Switzerland was only confirmed as the 2027 host in New Delhi, and the United Arab Emirates has announced that it will co-host 2027 and host the 2028 Summit, but this has not been confirmed. Without a transparent process for selecting future hosts or defining a multi-year agenda, civil society is forced into a cycle of reactive engagement. This unpredictability hinders sustained, long-term engagement substantively, financially, and logistically.
An accountability deficit: In the absence of formal monitoring and evaluation mechanisms, the commitments made at each Summit risk becoming performative and influenced by the shifting sands of geopolitics and corporate interests. There is currently no accountability framework to track progress on deliverables from one Summit to the next. For stakeholders engaging in each Summit, this raises fundamental questions about the efficacy of their participation and demands.
Procedural ambiguity for multistakeholder participation: The procedural pathways for integrating non-governmental and non-corporate input into concrete outcomes remain undefined. There is a focus on enabling participation, but avenues for meaningful influence on agendas, process, and outcomes are often opaque and inconsistent, leading to last-minute engagement efforts. The absence of both a standardized process for receiving and synthesizing multistakeholder contributions and a roadmap for doing so leads to inconsistent participation. Long-established Internet governance processes around multistakeholder participation — the World Summit on the Information Society (WSIS), the Internet Governance Forum (IGF), and NetMundial and the Sao Paulo Principles — offer important lessons and pathways for incorporating meaningful multistakeholder participation.
Limited engagement: While India has set a valuable precedent by focusing on the Global Majority, it is unclear whether that priority will be retained as the next few global AI governance gatherings shift back to the North. Both logistically and substantively, Majority World participation, especially from civil society and academia, might drop in Geneva without dedicated efforts. Meanwhile, while civil society has been provided more space at and around the last two Summits, a “participation paradox” persists: civil society is increasingly invited to speak at sessions and panels but remains largely excluded from agenda-setting and decision-making.
These systemic barriers reveal a disconnect between rhetoric and reality: while the language of a multistakeholder approach and an agenda of inclusivity, social empowerment, and access is adopted, the actual agency of underrepresented voices remains curtailed. This curtailment occurs at a critical juncture, where multistakeholder input is essential as AI technologies and their applications evolve at breakneck speed.
This rapid advancement, further complicated by deepening geopolitical uncertainty and the increasing use of AI in armed conflict, raises fundamental questions about acceptable use, liability for harm, the necessary guardrails and checks and balances, and ultimately who gets to define them. These questions demand rigorous deliberation by diverse voices, particularly those most directly impacted by the technology. Consequently, we must design inclusive multistakeholder processes that bring together a plurality of contexts, values, experiences, and expertise and can navigate both hyper-fast development cycles and new use cases, as well as a landscape of fractured global relationships.
Future course
As the center of gravity for AI governance shifts from New Delhi to Geneva, the international community faces a critical moment. The challenge now is to ensure that the AI governance agenda, discourse, and ultimately the frameworks that emerge, reflect and address the needs of and impact on the more than 80% of the world’s population that lives in the Global Majority. This must be achieved by centering the human rights framework, enabling continuity and accountability, and building governance frameworks that address real-world impacts through enhanced multistakeholder participation.
To facilitate this evolution, the Centre for Communication Governance at National Law University Delhi (CCG) and Global Network Initiative have developed a Reflections and Recommendations brief. Organized around nine priorities, the brief serves as a blueprint for ensuring that AI governance conversations and processes include diverse perspectives, particularly from the Global Majority; are grounded in real-world contexts and impacts; prioritize a bottom-up civil society agenda; and result in legitimate outcomes.
It is clear that building and sustaining meaningful participation will require more than simply funding travel and organizing events. Enabling meaningful multistakeholder participation requires elevating Global Majority priorities with contextual nuance, convening diverse stakeholders on equal footing across the AI ecosystem, and integrating rigorous policy analysis with deep technical understanding. To enhance the impact of civil society and the Global Majority on AI governance, there is a need to support community and coordination, civil society agenda-setting and substantive advocacy, and targeted research and peer learning.
The next two years offer a critical opportunity to emphasize and institutionalize the agency of underrepresented voices. Ultimately, the legitimacy of the global AI governance architecture will be measured by its ability to operationalize the rhetoric of multistakeholderism. This requires a shift from symbolic inclusion to transparent, accountable, bottom-up, rights-driven frameworks that give those most impacted by AI not only an equal seat at the table but also an opportunity to write the recipe.