A Call for Modular Multistakeholder AI Governance: Practical Recommendations for the Upcoming AI Action Summit

Chris Riley, Constance Bommelaer de Leusse / Jan 13, 2025

Chris Riley is the executive director of the Data Transfer Initiative, and Constance de Leusse is a senior advisor at the AI & Society Institute (ENS-PSL) and PSIA Tech Hub (Sciences Po).

French President Emmanuel Macron, photographed here in March 2024, will host the AI Action Summit at the Grand Palais in Paris on February 10-11, 2025.

This February, global leaders will convene in Paris for the “AI Action Summit,” a critical opportunity to shape the trajectory of artificial intelligence governance. However, this gathering will take place amidst increasing strain on international cooperation in technology governance. To ensure its success, the Summit must avoid merely affirming high-level principles and instead deliver actionable, inclusive frameworks that drive sustainable progress.

The structure of this Summit suggests real potential for meaningful impact. The focus areas—the future of work, safety and security, AI and the common good, global governance, and the innovation ecosystem—are aptly chosen. None of these challenges is simple, but all are vital to shaping AI's global impact. Drawing inspiration from the multistakeholder approach, three key recommendations stand out for the Summit and subsequent efforts. Leaders should:

  1. Adopt modular multistakeholder governance for AI;
  2. Prioritize openness through portability and interoperability;
  3. Involve citizens, academics, and the technical community in AI governance.

Modular Multistakeholder Governance: A Pragmatic Path Forward

The multistakeholder approach has proven effective in internet governance, balancing diverse interests while fostering practical collaboration. Similarly, modular governance can provide a blueprint for AI. By enabling bottom-up coordination through flexible modules—specialized, inclusive bodies that address specific challenges—AI governance can accommodate national sovereignty, legal diversity, and collective action.

This approach moves beyond symbolic consultation, empowering stakeholders from governments, civil society, academia, and the private sector to take meaningful roles. For instance, a modular framework for AI could include:

  • Shared Assessment Standards: Developing international benchmarks for evaluating AI systems. These standards, akin to the role of the IFRS Foundation in financial reporting, would enable countries to customize regulations while aligning on core principles.
  • Multistakeholder Compliance Modules: Creating global bodies to vet AI systems for compliance with ethical and safety standards before products reach the market. This would ease the burden on individual regulators and ensure transnational coherence.

Importantly, modular governance complements rather than replaces national authority. Regulators retain the power to enforce their own laws in any context at any time but benefit from the efficiency, consistency, and reduced cost of enforcement offered by modular collaboration.

While the contexts differ and modules hold no inherent sovereignty, this approach mirrors in some ways the EU's subsidiarity principle, which ensures that decisions are taken at the most appropriate level: national and local authorities act independently where they are best suited to address specific issues, while collective or centralized action is reserved for matters that transcend borders and require unified coordination. Modular governance likewise respects national authority while fostering efficiency and coherence in addressing shared challenges through collaborative, multistakeholder mechanisms.

Promoting Openness: The Key to Sustainable Innovation

As markets for AI technologies evolve, there is a real risk of concentration and ossification. Without proactive measures, dominant players could stifle competition, making accountability harder to enforce and undermining innovation. To counteract this, the Summit should prioritize an open AI ecosystem through policies that emphasize data portability and interoperability.

These principles ensure users and businesses can move freely between AI platforms, fostering healthy competition and empowering stakeholders across the value chain. Practical measures include:

  • User Data Portability: Allowing individuals to transfer their AI interaction histories seamlessly between platforms, ensuring user autonomy and reducing lock-in.
  • Third-Party Interoperability: Encouraging standards that enable businesses to integrate and switch between AI providers, promoting innovation downstream.

Unlike traditional antitrust measures, which are often slow and jurisdictionally constrained, openness provides a scalable solution that diversifies AI benefits without undermining investment. By fostering competition through interoperability, governments can align market incentives with regulatory goals, ensuring that innovation remains accessible and responsible.

Involving All Stakeholders in AI Governance

To reinforce the multistakeholder dimension of the AI Action Summit and to involve citizens and experts broadly, including professionals from civil society and academia, Sciences Po, The Future Society, CNNum, the AI & Society Institute (ENS-PSL), and Make.org launched two consultation processes. The recommendations stemming from these consultations emphasize the importance of a modular and participatory approach to AI governance and underscore a unified call for governance that aligns AI development with the public interest and ethical considerations.

  • Citizens emphasized the need for robust transparency measures, such as ensuring clear identification of AI-generated content and safeguarding personal data.
  • They also highlighted the importance of integrating AI education into curricula and increasing public accessibility to AI resources, fostering a society that is both informed and empowered.
  • Experts echoed these sentiments, advocating for actionable frameworks such as a Global Charter for Public Interest AI and the establishment of international AI auditing standards.
  • Both groups recognized the dual nature of AI: a tool with immense potential to address societal challenges—such as improving healthcare diagnostics and managing environmental crises—while posing risks that demand proactive, inclusive regulation.

These findings reinforce the need for the AI Action Summit to deliver practical, modular, and multistakeholder governance models that prioritize equity, sustainability, and global cooperation.

Conclusion

To achieve these outcomes, the Summit must do more than articulate shared principles; it must invest in practical, bottom-up initiatives. Just as the multistakeholder model thrives on inclusivity and collaborative problem-solving, so too must AI governance. The AI Action Summit has the potential to mark a turning point in AI governance by embracing modular governance and openness. These approaches, rooted in the proven successes of multistakeholder collaboration, offer a path to ensure AI develops in alignment with global public interests.

With clear commitments to inclusivity, practicality, and interoperability, the Summit can catalyze a governance model that addresses today’s challenges and builds a foundation for tomorrow’s opportunities. By taking action now, we can create an AI future that is innovative, equitable, and truly global.

Authors

Chris Riley
Chris Riley is Executive Director of the Data Transfer Initiative and a Distinguished Research Fellow at the University of Pennsylvania’s Annenberg Public Policy Center. Previously, he was a senior fellow for internet governance at the R Street Institute. He has worked on tech policy in D.C. and San...
Constance Bommelaer de Leusse
Constance Bommelaer de Leusse has more than 20 years of experience in digital policy, technology, research, and education. She currently serves as the Senior Advisor of the AI & Society Institute (ENS-PSL) and of the Tech Hub of the Paris School of International Affairs (Sciences Po). ...
