The G20’s Quiet Rebuttal to the AI Arms Race
Aleksei Turobov / Nov 12, 2025
The opening session of the G20 Summit at the Summit Hall of Bharat Mandapam Centre in Delhi on September 9, 2023. (Bangladeshi Press Information Department)
A dominant narrative surrounding the artificial intelligence debate has portrayed it as a technological arms race between Washington and Beijing — a zero-sum contest for dominance. But in the background, the G20 has spent nine years quietly building cooperative global governance. In November, its leaders will gather in South Africa, inheriting a foundation of work that shows the supposed AI race need not follow the logic of great-power competition.
At AIxGEO, we analyzed 71 official G20 documents — from China's 2016 presidency through to Brazil's 2024 summit — and found that the language of competition, race and rivalry never appears in G20 AI governance contexts.
Instead, every presidency, spanning democracies, monarchies and authoritarian states, has maintained a cooperative framing, even amid a global pandemic, war and worsening geopolitical tensions. When Brazil’s summit concluded in 2024, documents acknowledged economic tensions while simultaneously committing to “leverage AI for good and for all.” Months earlier, India’s 2023 presidency noted the global economic impact of the war in Ukraine, yet pledged to “work together to promote international cooperation … on international governance for AI.”
The G20 has deliberately treated AI as a shared global project. Its nine-year framework challenges assumptions about the inevitability of an AI arms race and demonstrates how international institutions can create norm spaces where cooperation persists, even as the world fractures elsewhere. Each G20 presidency has strengthened a shared AI framework and built on its predecessors’ work. By acknowledging competitive dynamics in economic and security domains while constructing cooperative norms in AI governance, the G20 has mounted an active institutional effort to insulate AI governance from those rivalries.
How cooperation was built
China's 2016 presidency laid the foundations by positioning AI within a broader "New Industrial Revolution," framing progress as requiring "all countries to work together to maximize and quicken their positive effects while minimizing the potential negative impacts." This idea — of cooperation as necessity — became the template for every presidency since.
Subsequent presidencies added layers of expertise and domain-specific focus. Germany highlighted financial services applications, Argentina focused on education, and Japan in 2019 formalized the "human-centered approach to AI" through the "non-binding G20 AI Principles," drawn from the Organisation for Economic Co-operation and Development (OECD). Japan’s framing — that AI can “help promote inclusive economic growth” while presenting “societal challenges” — became the standard across G20 documents.
Then came the stress tests. Saudi Arabia's 2020 pandemic presidency, rather than retreating, reaffirmed commitments while noting the role of digital technologies and connectivity in strengthening the response to the pandemic. The crisis validated cooperation. Italy’s 2021 presidency embedded governance into labor rights, introducing accountability, privacy, fairness and transparency standards and flagging AI as an enabler of a "greener economy," while acknowledging its environmental trade-offs.
Sectoral diversity remained at the heart of the framework. Indonesia's 2022 presidency highlighted AI’s roles from "smart tourism,” including chatbots able to pivot to disaster response, to countering corruption. India's 2023 positioning of AI for "agile, efficient and evidence-based decision-making," while recognizing it as "fundamentally redefining teaching and learning," showed meta-governance ambitions. Brazil’s 2024 presidency highlighted algorithmic bias in hiring and promotions and committed to “harness the benefits of safe, secure, and trustworthy Artificial Intelligence” through governance rather than assuming benefits occur automatically.
Each presidency built on the contributions of the last, forming a deliberately insulated AI governance norm that operates outside of economic and security rivalries.
Sophistication vs. operational reality
The framework’s sophistication lies in its domain-specific approach.
Rather than relying on universal principles, it addresses how “safe, secure, and trustworthy” AI takes different forms across domains — from agricultural productivity tools and emergency-response systems to anti-corruption auditing and evidence-based policymaking. The sectoral applications emerging across these presidencies reveal a sophistication often overlooked in debates that focus solely on general AI principles.
The G20's approach recognizes that AI in agriculture faces fundamentally different challenges from AI in healthcare. It shows that G20 governance thinking moves beyond broad statements and toward operational realities. Our previous research has found that this aligns with other international institutions: the United Nations, World Trade Organization, North Atlantic Treaty Organization (NATO) and OECD AI policies similarly recognize that domain-specific strategies are essential for meaningful AI governance.
At the same time, the framework has limits. While documents address algorithmic accountability, worker protection and educational access, questions of economic architecture — ownership concentration, profit distribution and market power in AI systems — remain largely unexamined. The cooperative norm space governs societal integration but leaves economic structures untouched. Yet even if on-the-ground technical collaboration is limited, the narrative construction in these documents creates a political infrastructure that constrains members’ own future policy choices, establishes reference points for regulatory alignment and builds institutional memory that can be activated when tensions ease.
Next up: South Africa
South Africa's upcoming November 22 summit inherits this foundation at a critical moment.
The cooperative narrative and accumulated framework create both pressure and opportunity: the summit could move AI governance from principles toward practice, such as through pilot programs, regulatory alignment mechanisms and shared evaluation frameworks.
Our analysis shows that a sustained, cross-presidency alternative to technological nationalism has been actively built, documented and reinforced. The key question is whether this foundation will translate into implementation amid rising geopolitical tensions and a current administration in the United States that has broken with previous multilateral approaches. Will the G20’s record of cooperation continue to shape policy, or will the carefully constructed norm space erode because too few believed it mattered? Nearly a decade of policymaking shows that the future of AI governance doesn’t have to mirror great-power competition; it can model something better, if leaders choose to sustain it.
(This G20 analysis is part of the AIxGEO project, which examines international approaches to AI governance, and will be included in our analysis of 'non-Western' international AI policy approaches to be published later this year. Our previous study, “Moving beyond competition: domain-specific approach for international AI framework,” explored the policy debate in Western institutions.)