Should AGI Really Be the Goal of Artificial Intelligence Research?
Justin Hendrix / Mar 9, 2025
Audio of this conversation is available via your favorite podcast service.
The goal of achieving "artificial general intelligence," or AGI, is shared by many in the AI field. OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work," and last summer, the company announced its plan to achieve AGI within five years. While other experts at companies like Meta and Anthropic quibble with the term, many AI researchers recognize AGI as either an explicit or implicit goal. Google DeepMind went so far as to set out "Levels of AGI," identifying key principles and definitions of the term.
AGI is no longer just a technical goal, but a political one. People in positions of power are eager to reach this ill-defined threshold. At the launch of the "Stargate" data center initiative at the White House on Tuesday, January 21, SoftBank's Masayoshi Son told President Donald Trump to expect AGI within his term. "AGI is coming very, very soon," he said. "And then after that, that's not the goal. After that, artificial superintelligence. We'll come to solve the issues that mankind would never ever have thought that we could solve. Well, this is the beginning of our golden age."
Today’s guests are among the authors of a new paper that argues the field should stop treating AGI as the north-star goal of AI research. They include:
- Eryk Salvaggio, a visiting professor in the Humanities Computing and Design department at the Rochester Institute of Technology and a Tech Policy Press fellow;
- Borhane Blili-Hamelin, an independent AI researcher and currently a data scientist at the Canadian bank TD; and
- Margaret Mitchell, chief ethics scientist at Hugging Face.
What follows is a lightly edited transcript of the discussion.
Eryk Salvaggio:
My name's Eryk Salvaggio, and I am a visiting professor in the Humanities Computing and Design department at the Rochester Institute of Technology and a Tech Policy Press fellow.
Borhane Blili-Hamelin:
I'm Borhane Blili-Hamelin. You can also always call me Bo. Just this week, I started a new role as a data scientist at TD, a Canadian bank. I have to say, because of this, that this work was done prior to joining TD. It's not in any way connected to TD, and the concept of the paper and also today's conversation are entirely my own opinions. They don't represent the views of the bank.
Margaret Mitchell:
I'm Margaret Mitchell. My main role right now is at Hugging Face as chief ethics scientist. I'm a computer scientist by training, but I work on operationalizing ethical values within the tech industry. And we don't all speak as a whole at Hugging Face, we're distributed, so similar to Bo, I don't represent the views of everyone, although I may represent the views of some people at Hugging Face.
Justin Hendrix:
I'm looking forward to hearing more about each of your views, no matter how they come across. And I suppose you also don't necessarily speak for all of the authors of the paper we're going to discuss today because there are more than a dozen authors on this paper, 'Stop Treating AGI (Artificial General Intelligence) as the North Star Goal of AI Research,' which caught my eye because this is the heart of the AI narrative at the moment.
When Masayoshi Son stood next to Donald Trump in the White House announcing the Stargate initiative with Sam Altman and Larry Ellison, he promised Donald Trump, 'We'll get to AGI within your term, Mr. President. Artificial general intelligence is nearly here, and we're going to do amazing things.' I don't know quite where to start with this other than to say, why is that wrong? Why is that goal the wrong goal? Perhaps, Meg, I'll start with you.
Margaret Mitchell:
So I wonder when people hear that what they think he's talking about. Part of the problem that we're getting at in the paper is that this AGI term doesn't have a very concrete meaning, but it does function as a narrative for people to prioritize their own interests. And so there are different kinds of definitions of what AGI might mean, but when we're in a position of talking through what we're going to achieve in a position of power, then this is more about just advancing technology that meets the needs or the interests of the people in power and giving it a positive sentiment by calling it intelligent, by calling it general.
While what general means is not well-defined and what intelligence means is not well-defined in psychology, cognitive science, neuroscience, all these different fields, it's just functioning as a narrative to move forward the technology that the people in power just want to move forward.

WASHINGTON, DC - JANUARY 21, 2025: OpenAI CEO Sam Altman (center), US President Donald Trump (left), Oracle Chairman Larry Ellison (first right), and SoftBank CEO Masayoshi Son (second right) speak during a news conference announcing an investment in AI infrastructure. (Photo by Andrew Harnik/Getty Images)
Eryk Salvaggio:
I would just say that I think there's an element of this AGI frame, which is, as noted, quite vague. And I think there is, from my perspective, a lot of utility in that vagueness in that, by not defining what artificial general intelligence is in real terms, you get to do all kinds of things and pretend that it's AGI. You get to point at all kinds of outcomes from various types of models and say, "This is on the path to AGI. This is sparks of sentience," or whatnot, which is a different conversation. But just to be clear, it's a similar trend.
And I think, for me, this points to almost a form of faith, whether you take it seriously or not, in terms of do people believe it or are they just utilizing it. But there is a form of faith that serves as an orientation for the AI research industry that often comes at the expense of a focus on real material needs, a focus on social concerns around the technology, and oftentimes can be used to serve as a substitute for real political deliberation and sometimes as an outright replacement for the types of political conversations and participatory conversations, which you may think are the same thing or not. For me, they are. And so I think AGI really deserves to be clearly specified so that we could say, "What do you mean?"
Borhane Blili-Hamelin:
Meg, you were describing how the vagueness in the language enables people to do all sorts of things, and that's a concern. And Eryk, the way you were describing it, people are saying AGI is a goal, or they're not saying it. What does that mean? That should be the question. And when we're faced with this question, we should say, I don't know what you're talking about. But the thing I want to add to the discussion of why it is the wrong goal is... And I just want to take a little bit of a step back. This wasn't a topic for me, personally, that was very salient until relatively recently.
The backstory for this paper, but also just personally how I became interested in the topic, was Talia Ringer, an amazing researcher, in the fall of... Sorry, in the spring of 2023 was like, "Hey, folks, there's a lot of interest in this topic. Why don't we bring a group of people together, a very large group of people together, to think about the topic and to write a position paper that's trying to think about AGI and critique AGI." And for me, my way in was not having been interested in AGI, it was instead having been interested in the very, very surprising parallels between critiques of human intelligence research and critiques of AI evaluation.
It was a surprising parallel, though not in the obvious sense: folks have obviously been thinking about everything that goes wrong when you start with imagining human abilities and then try to measure very bad proxies, or all sorts of things in machines that you think sound or look like the human stuff. There are all sorts of things that go weird. But the way we got into this, Leif Hancox-Li and I were thinking instead about what is similar in the structure of the work that goes into making assumptions about some capacity that you're trying to measure, some property of interest.
Call it intelligence in the case of humans, call it whatever you will in the case of AI evaluation. And we really weren't looking at this from the perspective of you're trying to measure something like general intelligence in machines, we were looking at the structure of how you define what you're trying to measure. And the thing that surprised us is that the AI evaluation community, on its own terms, came to the exact same conclusions that folks had come to in thinking about everything that goes wrong, but also how we should think about what goes wrong in the case of measuring human abilities.
And for me, that was the entry door, having done that comparison. Why is AGI the wrong goal? For me, the question of what intelligence is at its core has the feature of always specifying things that are desirable. It's a value-laden concept, is the way I like to think of it. So things that are desirable, things that are of value, things that are of ethical, social, and political importance, together with things that you can look at, things that you can describe, things that you can observe.
And when you're dealing with notions that have that quality of both specifying what good looks like and how you observe the thing you're calling desirable or good, when you're looking at concepts like that, you're always, at the end of the day, at some crucial layer of the topic, dealing with disagreements about what is politically, socially, and ethically desirable. That feature of disagreements about what matters becomes the feature of the topic. So just thinking about why AGI is the wrong goal, the first question in my mind is, what disagreements are we having, or are we maybe not having, because we're using this very vague language that masks the underlying question of what priorities we're bringing to the table in talking about AI research, and not just the social priorities, but also what research priorities, what engineering priorities.
All these questions of prioritization require explicit consideration. And for me, the first place I land is, we need to be having a conversation about these underlying disagreements that aren't happening. And even before I come around and say, "Don't ever talk about AGI," which personally, in this group, I feel like I'm more on the side of, I've been surprised, coming in not knowing much about the topic and looking at accounts of AGI. I've been surprised reading how many accounts I found incredibly thoughtful, and there's a lot of work on this topic that was surprising to me, that I don't end up finding unrigorous or uninteresting or unimportant because of its focus on the concept. I was surprised by that. For me, that was a huge surprise.
But what disagreements are we not having, and what questions about what matters and to whom are we jumping over? For me, that's the thing that's super front of mind in asking why it is the wrong goal.
Margaret Mitchell:
One of the things that Bo is really getting at here is what we call the illusion of consensus in the paper, where you are asserting something with this assumption that everyone knows what you're talking about and agrees on it being good, and that drowns out all of the other possible ways of contextualizing the AI problems, all the other ways of thinking through what's worth pursuing. And so, by putting forward this concept of AGI, we're moving everyone down the same path. They don't really know where they're going, but there's this illusion of consensus to the detriment of critical analyses of what AI might really be useful for and what it might not really be useful for.
So it's creating these exclusionary effects. It's creating this thoughtless moving forward in an ill-defined direction that really leaves out a lot of the technology that... For example, I care about, coming from someone who worked at a company that did assistive and augmentative technology, this kind of thing, where AGI is not the goal of AI in that context. The goal is assisting people. And all of the critical analyses you need to do about what the technology is doing relevant to that gets sidelined in favor of this other apparently great thing that we really don't have a clear conception of.
Justin Hendrix:
An illusion of consensus is one of the six traps that you say hinder the research community's ability to set worthwhile goals. I do want to go through, to some extent, each of these, but this illusion of consensus seems like the really big one. It's the one you put first, of course, I think for a reason. I connect it in my mind to, more generally, the illusion of consensus that I think Silicon Valley wants us all to have about not only the direction of AI but the direction of the planet, where we're going as a species, what we want to accomplish, why we need this technology to save us from all of the various existential harms that we might face, including climate change.
So it feels to me that this illusion of consensus goes a little further than just contestations around the term AGI or even the goal of AGI.
Eryk Salvaggio:
I think that comes back to what I was talking about before about this idea of AGI being not just a technological orientation, but an ideological orientation. And to me, the orientation is ultimately a fantasy about the salvation of concentrated power, right? Because it's a dream where we... There's a thing that gets thrown around with AI and AI research of solving, right? We solve creativity, we solve writing, right? And here, I worry that what we are solving is the process of consensus building that goes into politics, which is inevitably a site of contestation, right?
Democracy is contestation. If you solve democracy, you ultimately end democracy, because you are favoring somebody's consensus or omitting the entire method of consensus building. There's that question of who decides which consensus we're going to choose, whether that's Silicon Valley elites or a machine in the true science fiction sense. There are versions and definitions of the AGI mythology which say we'll ask the AGI how to solve climate change, for example, right? But that is a real techno-centric solution.
And we see this a lot in not very fancy AI. We see it in large language models. There's this misconstrual of the product as if it is the goal of the process, but there are a lot of endeavors where process is the point. And I think process is the product of a democracy, much as, say, a term paper is the product of grappling with your thoughts, which is why an LLM is not good for that, for the same reason an AGI is not good for the product of a democracy, which is the process, which is that contestation, which is why I keep bringing up Chantal Mouffe and agonistic pluralism, right? You need to have the site for contestation, and as soon as the contestation goes away, democracy goes away.
So if AGI is used to reach that goal, do we actually want that at all? And are we building systems that do not allow for political participation in goal setting that solve that problem? If we are, then that's a very dangerous thing. And I will say, many people are not, right? But this looseness of the goal means that even if you don't think that you're building that, you might be. This is, to me, why laying out these traps was so important.
Justin Hendrix:
You never know when you're laying the pipe for authoritarianism until perhaps it's too late.
Eryk Salvaggio:
Yeah.
Justin Hendrix:
Let me ask about the second of these problems, supercharging bad science. You lay out multiple sub-problems in this area. Why does pointing towards AGI lead to bad science?
Margaret Mitchell:
I think that one of the things we're trying to get at here is that, speaking to Eryk's point, there's this theological belief or this religious belief in AGI as being some wonderful thing to work towards, to the detriment of critical thinking about all of the pieces at play. So there generally is an under-specification of the concrete goals of why AGI should be around or what specifically that would be. There is a lack of scientific rigor. I think most people in middle school, in the US at least, learn about the scientific method: you put forward a hypothesis and then you test that hypothesis, and that sort of thing. Under the umbrella of the pursuit of AGI, all of that rigorous science is abandoned and justified by this belief that we're working towards something inherently good.
So, a lot of the rigor that other sciences and other disciplines have really done a lot of great work on developing are left to the wayside when it comes to AGI development in particular. And then I think another one we mentioned is around the ambiguity between confirmatory and exploratory research. That has to do with our confirmation biases, being able to actually rigorously test things that we think might be true versus just exploring to see what would be true. All of this stuff gets conflated as people are working towards developing AGI because there's just this general belief that this is a good thing to be working for, independent of scientific method or independent of scientific processes.
Borhane Blili-Hamelin:
There are three things about this section that also feel like great context to add. The first one is Leif Hancox-Li. Shout out to Leif. We made the decision, Leif specifically wanted not to be a lead author for the paper, but in practice, in the last stretch of actually writing the paper, Leif just played an enormous role. And this was one of the sections where Leif played such a big role. The second thing is a little bit of context for why we wrote this paper, which was really intended to reach an audience of AI researchers. There are different papers we could write on this topic.
We can write papers that are meant for a policy audience, for decision-makers. We can write papers that are meant for more of a civil society audience and try to rally people together behind what the goals should be. But with this paper in particular, we wanted to target the research community, people who often do much more technical work and don't necessarily care about all of these debates about big words. And for us, the key background of the paper is the problem of distinguishing hype from reality. What is actually true, but also what can we establish on the basis of evidence-based research?
That is an area where communities don't all play the same role. That question of providing all sorts of other communities in this space with very good evidence-based information that they can then rely on in making decisions, and that helps distinguish hype from reality, is an underlying problem across the AI space. This is not specific to AGI. The pain point of distinguishing hype from reality is one of the top topics that the UN identifies as an obstacle in global AI governance. That doesn't come from AGI; that comes from the current things that are happening in the field of AI and the speed at which it's being deployed, the range of contexts across which it's being deployed.
So much AI development is happening not just at a fast pace but in ways that are hard to distinguish from marketing, hard to distinguish from the self-interest, often well-motivated, of actors who are trying to achieve their own goals and making claims. So that responsibility to distinguish hype from reality, for me, is a special responsibility of the research community. Other communities have a huge role to play, but the research community is really falling asleep at the wheel if it can't do this. This question of what is happening with bad science in the space of AI becomes really front of mind when you start with this question of the research community's responsibility. That's the second point I want to make about the section.
The third point about the section is that every single problem we highlight is one that exists independently of AGI. So we talk about goals being poorly specified. That doesn't just happen with AGI; that happens all over the place. We talk about the failure to distinguish the much more engineering-oriented mindset of a lot of AI research, I'm not going to say of all AI research, from research that is aimed at things like hypothesis testing, that is aimed at things like figuring out how our understanding of the world lines up with reality, how we can rely on evidence to figure out whether our understanding of the world lines up with reality.
And that is the core. There are many ways of thinking about science, but that is part of what makes science distinctive and important. And that pain point of all the things that go wrong in the AI space through jumping over, or oftentimes just saying, "We don't care about these questions that have to do with..." And I'm not saying everyone does that, but this is a pervasive problem that so many researchers have been thinking through. Pseudoscience in the AI space isn't the language people use at this point. And the same with the distinction between...
That can be a more granular and pointed way to think about this, but there's also just a very important question: if you're trying to figure out whether our understanding of the world lines up with our observations, where do you sit in relation to that process of figuring out whether the evidence you have, the things you can observe, the things you can test, line up with your assumptions? It's really crucial to ask yourself, "Where do I sit in that process? Am I at a stage where I haven't yet figured out what assumptions I'm even making about the world?" That can be more the exploratory stage.
"Am I at a stage where I've very thoughtfully, pointedly, in ways that I can go on to pressure test and ask, 'Does this hold up?' I'm in a position to do that. Okay, I can ask, 'Does this hold up?'" That's more the confirmatory stage. And again, that's a problem that's pervasive in the AI space, but the AGI just makes much worse through the vagueness of the language.
Eryk Salvaggio:
If I may, I have to say, I think one of the particularly astute aspects of this section of the paper for me, something that I learned from this process, was that it reminded me of the anthropologist Diana Forsythe, who in the nineties went and studied expert systems and found out that really what people were doing with this form of AI back then was answered by the question, "Does it work?" As opposed to the many types of questions that you might be asking; she, as an anthropologist, had alternatives. But I also think that the scientific research community has a different question, right? There's a different orientation to questions.
It isn't, "Does this work?" It's, "Does it work for the reasons you thought it would work? Is it doing the things you hypothesized it would do based on the thing you are testing?" And those are very different questions from, "Does it work?" And yet, whenever we get a new AI model, a new LLM, a new whatever, the question is, "It works," or I guess the answer is, "It works," and then there's all kinds of backward reasoning as to why it works, much of which is way out there as far as I'm concerned, in terms of we're this much closer to it being a human child.
And that is, I think part of what contributes to this hype cycle of misunderstanding, is that the thing does something and so we assume that is a verification of any theory that might be out there, that might exist about why AI operates the way it does. And so that's why I thought this is a particularly useful trap to identify and think through.
Justin Hendrix:
The next trap you identify is one around presuming value neutrality, that is, framing goals as purely technical or scientific in nature when political, social, or ethical considerations are implied and/or essential. It feels like we've talked about that to some extent in this discussion already. And then there's the goal lottery, this idea of incentives, circumstances, and luck driving the adoption of goals even without scientific, engineering, or societal merit. It seems like, Eryk, you've just touched on that a little bit there.
I want to get to this idea around generality debt because I think you're also hinting at this with the comment around the human baby. I think the thought of AGI is that, eventually, we'll be able to essentially mint versions of our own brains or maybe brains that are even better than ours. And ultimately, that's the thing that we're trying to do, it's to get something that's as pliant as the human mind and that can really solve any type of problem. You say that essentially allows us to postpone engineering, scientific, and societal decisions. What do you mean by that?
Margaret Mitchell:
This is something that I really have a bee in my bonnet about, that people use this term 'general,' even though we know that models are not trained in a way that gives them access to something general. They have access to a lot of different things, and those lots of different things are what's called general. I don't know how much people who listen here know about the details of machine learning training, but basically you take data from newspapers and data from magazines and data from blog posts and social media posts and NSYNC fan fiction and pictures of cats and all these things, and these are all things that you put into a system to train on. And by not specifying what those things are, by not looking at them and not using any preconceived notion of what they should be in order to guide curation, you just call it general.
So we have this concept put forward in AI and AGI research of making systems that are general, which is really just putting a blanket over a lot of different things, a massive, diverse variety of different concepts that we're just not defining and not critically engaging with. And so, what happens there is that you have systems where we can't make reasonable predictions about how they might work, because we haven't done the homework of actually looking at what specifically they've learned. It's just been under this umbrella of generality.
This term, generality, is something that should be composed, and is composed, of a bunch of different components. Generality isn't something in and of itself that exists. It is a function of many different components. But by not actually grappling with that, by not actually dealing with what those components are and what that might mean for system behavior, we push off all of these questions about what we should be curating for. What are the best things to be curating in order to have the kinds of systems we want to have? What are the specifics of the systems we want to have? We just sweep it all under the rug of generality and so don't critically engage with the very diverse components that make up what a general thing would be.
Justin Hendrix:
The last one of the traps that you identify is what you call 'normalized exclusion.' So this comes from excluding certain communities and experts from shaping the goals of AI research. It feels like some of the individuals who've experienced that are among the authors of this paper. But let's talk about that one just for a moment. AGI pushes out a lot of ideas, a lot of potential science, and a lot of other types of goals that we might have for technological development, but it also pushes out people.
Borhane Blili-Hamelin:
Eryk, I have a question for you here, or I don't know. I want to reflect back something you said when, and I hope it's okay that I mention this, we just released a big report on generative AI red teaming, and one of the amazing people who asked to not be anonymous, who we interviewed and had so many amazing things to say and was incredibly helpful to our research was Eryk. And one of your observations about red teaming, which is a testing practice that's become incredibly popular, but also that's become oftentimes very misconceived in the context of general AI.
One of your observations about red teaming feels very relevant to this question of who gets to decide what we're even aiming for here, and what happens when, instead of figuring out what the priorities are, and whose priorities, and who and what process we should rely on in setting priorities, you just carve out a special and very abstract topic, but also maybe a topic that many people don't care about, right?
The thing about AGI as a topic, and if that's what we rely on in defining what we're even looking for is you're also going to just lose a lot of people's interest, assuming that's where you start, right? But also, you might give yourself the opportunity to not ask the right questions. So one of the observations in that setting was, when you're doing red teaming, you need to start by asking yourself, and I might be misquoting you here, "Do I understand how this person thinks? And if you do, you've got the wrong person." It's just a wonderful observation. I don't know. I feel like it's relevant here.
Eryk Salvaggio:
I think this is actually a really useful case study, to be honest, because this is a frame in which we are talking about exclusion from industry. This is an example where we're talking about red teaming, which is: you get a bunch of people together and you tell them to try to do something to, in this case, a large language model. And a lot of that, the contours of it, were predetermined. Who was able to participate was self-selected by who was in attendance at the conference. The methods that they were able to use were determined by certain guardrails that were placed on access to the models, and who could access them, for how long, and what was prioritized.
And we were there as artists and as people who engaged in hacking diffusion models and large language models. And nothing in that setup made any sense to us, in terms of how we approached these models, how we approached these systems, as people who are engaged in trying to understand what harms come out of them. And it was illustrative of a lot of the stuff that does come across, I think, in terms of who do you talk to about the goal setting? But then there is also this bigger issue that is being framed in this section of the paper, which is entire disciplines.
It's not just people, specific people; it's entire disciplines of thinking that may have a different frame on artificial intelligence. There are certainly aspects of academia and academic research outside this hallowed interdisciplinary enclosure that AI and AI research has become. And then there's also the technical development space, I think, which is mentioned in the paper too, which is: who are the people who have the access to do this large-scale training? Who are the people who have the expertise, or the funds to pay people, to do that? And who has the blessing to be able to access these resources?
That narrows down the field significantly. So it's self-selected by interest. You've got to be interested in AGI to start working in AGI. And to be interested in AGI, you have to buy into some of the myths that are already out there. And then who do you reward? It's people who have bought into the myths. And who gets to work on it? People who have been rewarded. So there is this siloing, even though it is definitely a transdisciplinary research field, there is a real siloing about which disciplines are the transdisciplines in this case. Sorry, the interdisciplines. Transdisciplinary would be the ideal.
Margaret Mitchell:
I think that speaks to some of the other problems that we were highlighting in the paper as well. So there's the ideological orientation towards AGI. So if you're a believer, you can work on it, but if you're not a believer, if you're questioning it, then you're not really invited to participate. And also this idea of generality where if you don't break down generality into its subcomponents, then you don't see a need to include other disciplines, because general means it can do medicine and math and reading and arithmetic, all these things, but without critical consideration of these different subcomponents and disciplines, then you don't actually need to interact with these people at all or learn anything from them at all because the system is general. It does all of the above.
So there's really a disconnect between what goes into making something that's critically well engaged with all of the roles it should be playing, or that people hope for it to play, and then what's being sold and put forward by those who follow this ideological idea of AGI as the North Star goal.
Justin Hendrix:
In this paper, you make multiple recommendations. Some of them, I think, won't surprise most Tech Policy Press listeners, you call for greater inclusion in goal setting. You say that pluralism of goals and approaches should be considered worthwhile or more worthwhile. And of course, you want folks to be more specific about what goals they're pursuing, not just deferring to this squishy concept of artificial general intelligence.
But I have to ask on behalf of my listeners who might be wondering, what's the bottom line for policymakers here? Assuming that there are any policymakers that want to listen to this right now, it does seem like to some extent, especially in this country and perhaps now maybe in Europe as well, that there's a tilt towards just buying the corporate line and that this is in fact the North Star goal, whether you like it or not, but what would you tell policymakers about the ideas that are here? What would you hope that they would take from your recommendations?
Borhane Blili-Hamelin:
The first one, for me, for the policymakers, is: instead of listening to the people who have a lot of clout, a lot of sway, who are maybe the loudest voices in the room, who also maybe have a story that feels palpable, a story that feels exciting, instead of asking who is telling me an exciting story that gives me dreams for my country and so on and so forth, instead of asking where those shiny stories are being told, and what they are, and what I can latch onto in terms of a shiny story, ask yourself, "What kind of consensus matters to you as a policymaker?"
And also, when you're confronted with these shiny stories... Because fundamentally, this question of AGI, of what's happening with goals for AI research, we're not talking here about formal, organized structures. With some exceptions, there are companies who have AGI in their charter, so there are situations where all of a sudden there's a process, all of a sudden there are formal documents that make AGI part of a very tangible structure, but that's the exception. That's not the rule. For the most part, this topic is really part of the intangible, informal ways in which all sorts of actors in the AI space approach their relationship to goals.
So it's part of the softest, most squishy components of organizing our relationship to goals. Another way to think about it is, it's part of the most informal dimensions of governance of how groups organize achieving their goals. So ask yourself as a policymaker not, "Where are the stories that I can latch onto?" Ask yourself instead, "What kind of consensus matters? When does consensus matter and how do I get there?"
Justin Hendrix:
It's a powerful elixir for politicians, right? I'm going to give you the ability to instantly mint brains. Artificial general intelligence will give you the ability to have an infinite number of scientists, soldiers, workers. We're going to solve all these big, hairy social problems. We're going to address climate change. We're going to fix all the problems that seem so complicated. If you're a politician who's dealing with the polycrisis, right? You've been to Davos and they've sold you on the polycrisis, this is some powerful medicine. I don't know. Given the billions put behind this vision, are you in any way confident that either the AI research community and/or the broader political society will put aside AGI as a goal?
Margaret Mitchell:
I think that, given everything we've discussed here, people will declare that AGI has been achieved. There's a massive incentive to do that, if for no other reason than because of all the money that has gone into it already. And so, I think we're in a position now where there are going to be organizations in the foreseeable future that say that they've reached AGI, and they're going to try and monetize that in various ways. I would encourage policymakers to instead think about, "What should this technology be useful for, specifically? And for each of those things, what needs to be demonstrated in order to assert that the technology is useful for that thing?"
Regardless of the grand claims and this notion of intelligence being wonderful and generality encompassing everything, get down to the brass tacks. What are the specifics of what this technology should be useful for and for each of those, what needs to be demonstrated so that we know it is useful for that. I think policymakers can really help guide the technology industry there.
Eryk Salvaggio:
I would just say that it's important to remember that AGI is literally not a technology at the moment. AGI is a political organization. It is a way of organizing society. And if you look at definitions of AGI, you'll often see that they tend to reflect the vision of political order that they are supposed to bring about, by whoever is building them, ranging from an evaluation of a machine that can raise a million dollars out of a $10,000 seed fund, right? That tells you specifically, not about the technology, but about the vision of the organization of society that this technology is supposed to be able to bring about.
And so if I were a policymaker, the question that I would ask to anyone who's talking about AGI is, "What is this as a political idea?" Stop treating it like a technology. Start treating it as a political proposal and ask yourself if the proposal is something you would buy, if they were bringing any other technology or any other excuse to your desk.
Borhane Blili-Hamelin:
I feel like the two of you, in different ways, you're bringing about this question of, politicians are saying they need help distinguishing hype from reality. Where has that gone? Keep asking for help distinguishing hype from reality.
Justin Hendrix:
We'll continue to do that on this podcast, I'm sure. We'll do our best at least, and hopefully, with each of your contributions, generally and also occasionally in Tech Policy Press, I'm grateful for those, we'll keep at it. I thank you all for taking the time to speak to me about this work. Bo, Margaret, Eryk, thank you so much.
Eryk Salvaggio:
Thanks for having me.
Margaret Mitchell:
Thanks for the opportunity.
Borhane Blili-Hamelin:
Thank you so much.