Deficits, Gaps, and Compromises in the UN's “Governing AI for Humanity”

Scott Timcke / Sep 26, 2024

Scott Timcke is a Senior Research Associate at Research ICT Africa.


Last Thursday, the United Nations High-level Advisory Body on Artificial Intelligence released its final report, titled “Governing AI for Humanity.” The report outlines a global framework for AI governance, premised on the idea that the challenges and opportunities the technology presents “require a holistic, global approach cutting transversally across political, economic, social, ethical, human rights, technical, environmental and other domains.” But to understand the ambitions of the document and its implications, it is first necessary to consider more fundamental questions about fairness, cooperation, and the tensions that arise when trying to balance competing interests.

A Difficult Dilemma

In his book Equality and Partiality, American philosopher Thomas Nagel tackles a core dilemma in the formation of just societies. For Nagel, political legitimacy requires honoring impartial treatment while at the same time reasonably respecting each person’s rights and interests. Because these intuitions pull in different directions, in theory and in practice, satisfying both may be a task we do not yet know how to accomplish. Nagel’s dilemma becomes especially perceptible when considering social inequality and, inter alia, global inequality, matters which evidently weighed heavily on the minds of the experts on the UN’s High-level Advisory Body on Artificial Intelligence.

In elaborating upon Nagel’s work, Canadian philosopher G.A. Cohen writes that “in our unequal world the rich should sacrifice to help the poor. But how much should they give up? There is a level of sacrifice so modest that the rich could not reasonably refuse it, and a level so high that the poor could not reasonably demand it.”

When resistance to sacrifices and advocacy of demands, self-interested as they always are, sit close together, justifiable agreement about redistribution becomes possible. Where these claims overlap, the demanded sacrifice could not reasonably be rejected because, for example, redistribution would not leave the rich too badly off. At the same time, there are circumstances where demands can go too far, where the sacrifices sought are too great, and where pressing for impartial treatment compromises reasonable rights and interests. What kind of equality-pursuing projects are possible when there is such a gap?

Global AI governance faces a version of this dilemma. Most recognize the importance of all countries having an equal hand in shaping the future of AI. Concurrently, each country has its own AI aspirations and concerns. Further, some countries may feel disadvantaged by strict regulations; others might argue that lax rules compromise global safety. The same rough set of concerns exists for the many firms for whom AI is part of their stock-in-trade.

We could probably agree that any legitimate global AI governance framework must weigh both impartiality and vested interests, although, as Nagel suggests, it is rare that such agreements can be forged when inequalities are so persistent. More so when the stakes are nothing less than who effectively controls the near future.

Seeking to Produce Coherence

If navigating the rough rapids of egalitarianism is too much, perhaps there are other viable alternatives. In addition to offering an overview of the challenges, opportunities, and deficits in global AI governance, Governing AI for Humanity seeks to produce a coherent understanding to supersede the current “patchwork of norms and institutions.” While this patchwork may have been a pragmatic start, the report notes that “none of them can be truly global in reach and comprehensive in coverage.” The Advisory Body knows well that an incoherent patchwork greatly favors early-moving nations and powerful actors, as it allows them to navigate the gaps and promote their interests. In this respect, the main goal of the report is to produce coherence, a set of shared assumptions to guide conversations about the technical changes shaping the circulation of people, goods, services, money, information, and power.

The Advisory Body’s ethos is sensitive to issues around bias, surveillance, confabulations, information integrity, peace and security, and energy requirements, especially as these intersect with fast, opaque and autonomous technical systems. Its conceptualization of the global policy problem is that AI technological systems are transboundary, which in turn necessitates a coordinated global approach to AI governance. Furthermore, the final report maintains a focus on opportunities and enablers for AI development, as well as comprehensive risk assessments, carrying forward key elements from earlier drafts.

At its core, the final report builds a case for enhancing global cooperation in AI development and deployment. It proposes several initiatives to address existing gaps in representation, coordination, and implementation. These include the formation of a scientific panel, potentially evolving from the current advisory body, to provide ongoing expert guidance. The report also calls for policy dialogues aimed at establishing universal standards for AI operation and use.

Recognizing the importance of inclusive development, the report emphasizes capacity building in the Global South, linked to a global fund to boost talent acquisition and development in these regions. On the institutional front, it recommends establishing an AI Office within the United Nations Secretariat, with the possibility of evolving into a fuller agency should future circumstances necessitate it. This proposal reflects the assessment that AI systems will play a constitutive role in global affairs and the need for dedicated international oversight.

Grappling with Consequences

While commendable, this approach does not yet fully persuade me. In my view, it redirects attention away from the materiality of AI. The full material foundation—including the location of data centers, the laws governing them, and who holds direct control rights—comprises important matters unlikely to be reconciled through appeals to shared commitments, as I have argued elsewhere.

Next, the existing patchwork of regulations certainly complicates matters. Even so, the Advisory Body could have been more forthright about which areas of AI development are ill-advised or should be discouraged. The decision to bracket off matters of weaponry is a significant error given the advancements seen on battlefields in the past decade.

While the report takes strides in addressing global AI governance, some critical issues appear to receive less emphasis. Concerns about power concentration in the AI sector and the widening technological gap between countries are not as prominently featured compared to the first draft. The prospect of implementing redistributive measures to address growing inequalities in AI development and deployment remains a distant consideration.

For governance paradigms seeking to curtail inequalities, it is essential that they comprehend the origins and primary drivers of economic injustices. This kind of understanding can form the basis for recognizing why and how AI's benefits are not equitably distributed. The report tiptoes around how capital is shaping the distribution and experiences of AI technologies. There is an absence of concerted efforts to direct investment towards the Global South, perpetuating existing global hierarchies. As a result, certain regions continue to drive technological change, while others merely grapple with its consequences.

Ultimately we return to the philosophers’ dilemma about fairness. These entrenched power imbalances in AI development and distribution remain inadequately addressed, highlighting the need to centrally foreground issues of economic injustice in global AI governance. Yet will this project be seen as unreasonable and too demanding? As Cohen cautions, “mind the gap.”
