The Dangers of Imposing Global North Approaches to AI Governance on the Global South

Gordon LaForge / Sep 5, 2024

Gordon LaForge is one of the authors of Bridging the AI Governance Divide, a new policy paper from New America and the Igarapé Institute.

Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0

The world of artificial intelligence (AI) is heavily lopsided. One American firm – Nvidia, the world’s most valuable company – holds as much as 95% of the AI chip market. The $335 billion in private capital invested in American AI companies from 2013 to 2023 was three times more than in China, 11 times more than in the UK, and 30 times more than in India. And of the 109 most important machine learning models, 101 were made in the US, Western Europe, or China. Only two were made in a global South country: Egypt.

Though the optimist’s scenario is that AI will lift all boats, many observers expect the technology will exacerbate global inequalities in the near term. When the World Economic Forum surveyed more than 60 chief economists at the end of 2023, nearly all said AI would improve productivity in high-income countries in the next five years. Only half predicted AI would improve productivity in low-income countries, and six out of ten anticipated AI would widen the divide between the global North and South.

A similar imbalance exists in AI governance. The policies, standards, guidelines, and rules shaping the development and use of AI overwhelmingly originate in the rich world. A review prepared by New America and the Igarapé Institute for the G20 found that out of nearly 500 AI policies, standards, and guidelines developed from 2011 through 2023, two-thirds originated in the US, Europe, or China, while only 7 percent came from Latin America and Africa.

This global disparity in AI rule-setting means that the technology’s path will trace the national, commercial, and social interests of wealthy nations, at times to the detriment of societies with less power and fewer resources in the global South. Without a greater say and more AI policymaking capacity, these populations are more likely to be exposed to AI risks and deprived of AI benefits. The consequences – labor displacement, political destabilization, widening economic inequality, and others – could drive conflicts and migration that will not stay neatly confined within national borders.

AI will be high on the agenda at several international gatherings this fall, including at the UN Summit of the Future, where member states are expected to agree to the Global Digital Compact, and at the G20 Summit in Brazil. These confabs and other global initiatives should work to narrow the AI governance divide by focusing attention on the AI priorities that matter to developing countries and by taking concrete actions to strengthen AI development and policymaking capacity in those nations.

The AI risk profile varies by geography. The extent of worker displacement by AI in the US or Europe is at this point still largely speculative. In the global South, it is already apparent. Countries with advanced AI industries worry primarily about misuse, such as weaponization, disinformation, or loss of human control. Developing countries worry more about missed use, the risk of forgoing the enormous economic and development potential of AI applications in domains such as agriculture, health, and education.

More attention should be paid to the priorities of the global South. So far, the most advanced global AI governance conversations are dominated by rich nations and, unsurprisingly, reflect their priorities and concerns. The UK AI Safety Summit, the G7 Hiroshima AI Process, industry and civil society calls to pause AI development or build an International Atomic Energy Agency equivalent – all of these focus on containing the development and use of AI, an approach that would cement the power of large rich-world incumbents and hamper the growth of nascent AI players in the developing world.

Similarly, national lawmakers and regulators across the world often draw inspiration from rich-world standard-setters, like the EU AI Act, China’s AI rules, or the US Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. But these policies may be ill-suited for developing countries. Ideas like model licensing regimes or AI monitoring bodies with onerous or invasive reporting requirements might suit Brussels or Beijing, but in Nairobi or Brasilia could introduce human rights risks, stifle the growth of AI startups, and slow the adoption of AI tools.

This is not to say that the world should ignore the risk of misuse or that existing AI governance efforts are misguided. It is simply that equal weight should be placed on the AI risks that matter to developing countries. At the least, those countries should have a prominent say in setting the agenda in the existing forums and processes that are global in scope, such as the UNGA and G20.

Better would be the creation of global initiatives expressly focused on the AI governance priorities of the majority world. One idea is an Equitable AI Development Forum, in which countries, industry, researchers, and philanthropists could come together to develop and strengthen AI policy and development ecosystems in the global South. Activities could include investing in shared infrastructure like datasets and compute for training and running AI models; designing regulatory sandboxes for safety testing; and incubating private-public-philanthropic partnerships that provide blended financing for local AI startups or pool AI resources in a manner similar to how Gavi, the Vaccine Alliance, procures and distributes vaccines worldwide.

Such a forum could facilitate learning exchanges and training programs aimed at strengthening AI literacy and policymaking within governments. AI is less like social media and more like electricity, a foundational technology that will affect every sector. Whether their portfolio is national defense, healthcare, labor, education, or anything else, civil servants and lawmakers across government will need to be versed in AI to develop and implement sensible and responsible AI regulations and policies that fit local context.

The G20, which represents more than 80 percent of global GDP and convenes in Rio de Janeiro in November, could be a good place to spur the creation of such a forum, or at least to advance some of its priorities. In the wake of the 2008 financial crisis, G20 finance ministers gathered to figure out how to mitigate risk in the global financial system. Today, ministers responsible for digital economy issues could gather in a similar way to discuss how to build global cooperation that can strengthen safe and responsible AI policy worldwide.

Given the vast disparities in AI power between the rich world and developing countries, closing the AI governance divide will be no easy task. But at stake is whether AI just helps the rich get richer or makes the whole world better off.

Authors

Gordon LaForge
Gordon LaForge is a senior policy analyst with New America, a think tank, where he researches and writes on the geopolitics and governance of emerging technologies and other issues in global politics. He is also visiting faculty at Stanford University’s Leadership Academy for Development and the ASU...