What Do We Mean When We Say “Artificial General Intelligence?”

Prithvi Iyer / Feb 13, 2024

Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / CC-BY 4.0

In a recent interview with The Verge deputy editor Alex Heath, Meta CEO Mark Zuckerberg said the company’s chief ambition is to build artificial general intelligence (AGI). “We’ve come to this view that, in order to build the products that we want to build, we need to build for general intelligence,” said Zuckerberg.

Of course, Zuckerberg’s interest in AGI is not unique in Silicon Valley. The dream of machines that rival human intelligence is widespread among tech executives and venture capitalists. But the term “AGI” remains murky, with no agreed-upon definition of what it means or what such intelligence would look like in the real world.

Thick concepts

A new research paper by Borhane Blili-Hamelin, a Data & Society affiliate, and Leif Hancox-Li and Andrew Smart, both at Google Research, investigates the socio-political and cultural values that underlie conceptions of AGI. They argue that AGI is not “value-neutral” and that developing AGI without “explicit attention to the values they encode, the people they include or exclude” can lead to greater political, ethical, and epistemic harm.

In most discussions, the concept of AGI presents a fundamental question: What does it mean for a machine to have intelligence that rivals that of human beings? The authors argue that disagreement over this question stems from competing political, social, and ethical values that shape our conceptions of intelligence and of AI as a technology. These competing values should not be ignored. Instead, exploring what the authors call the “value-laden” characteristics of AGI can help make sense of the different visions of this transformative technology.

According to the authors, definitions of AGI borrow heavily from conceptions of human intelligence, which some philosophers refer to as a “thick concept”: one with both descriptive and normative dimensions. Descriptively, intelligence refers to certain “empirical phenomena” that constitute intelligent behavior. But evaluating intelligence also means assessing the “desirability of certain behaviors,” which is an inherently normative judgment. Because these normative questions are unavoidable, socio-political and ethical values are inevitably embedded in the definition of a thick concept like intelligence. In other words, when we define something as complex as intelligence, our social, political, and moral beliefs are built into that judgment.

That is why it matters that the values underlying definitions of AGI offered by executives at technology firms are rarely scrutinized or acknowledged. For instance, the researchers note that OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work,” which “benefits all of humanity.” By focusing solely on economically valuable work, OpenAI’s conception of AGI implies that other types of work are less valuable, a normative and value-informed choice.

Values are crucial in determining the very basis of what AGI will look like. Most conceptions of AGI focus on what tasks a system can accomplish rather than on how it measures up to the cognitive processes that underlie human intelligence. This is because outcomes are easier to measure and, in turn, often determine how AI research is funded.

For example, according to an AI researcher interviewed by the authors, “to his knowledge, there was not a single grant offered in 2023 to study the topic of consciousness in AI.” This isn’t necessarily a bad thing; it is prudent to focus research on immediate AI harms and potential benefits. However, it still shows that the values ascribed to AGI determine where resources are allocated. While the authors don’t try to judge what the focus of AGI research should be, they hope to show that “different practitioners make different choices about what they consider to be ‘practical’ for achieving AGI. Choices about effectiveness and how to allocate limited resources are inherently value-laden—raising questions like for what goals, given whose beliefs and preferences, given what conditions.”

The paper discusses other value-laden characteristics of AGI, including whether AGI captures individual or collective intelligence and what “general” means in this context. This taxonomy of values shows that AGI is not a homogeneous construct, and that the technology’s future will be significantly shaped by the value-laden choices developers make in pursuing it.

Is a pro-democratic approach to AGI possible?

Rather than ignoring the values encoded into conceptions of AGI, as many tech firms appear to do in their public statements, the authors propose an alternative vision for AGI development, premised on values of “contextualism, epistemic justice, inclusiveness, and democracy,” which the authors consider “vital for visions of the future of AI worth pursuing.”

Epistemic justice and inclusivity are often absent from industry-led conceptions of AGI because of an asymmetry in power. Sure, companies like Meta, Google, and OpenAI believe AGI will impact the world. But communities and their needs are not necessarily accounted for in how AGI is defined, or in how the technology will be deployed. The authors believe that valuing epistemic justice will ensure that AGI development goes “hand in hand with participatory, inclusive, and democratic decision-making processes.”

Ultimately, the authors argue that AGI systems can only be “intelligent” and serve people if they are also “objects of dissent and deliberative contestation, rather than systems designed from the top-down by experts only.” Unfortunately, the current state of AI development is characterized by a handful of very large companies with disproportionate power over how this technology is conceived and deployed. The future of AGI should instead give space for impacted communities to hold decision-makers accountable. That can only happen if the values underlying conceptions of AGI are reimagined to serve the people rather than the political and economic interests of a few powerful actors.
