The Race for AI Supremacy
Justin Hendrix / Nov 17, 2024

Audio of this conversation is available via your favorite podcast service.
Parmy Olson is a Bloomberg Opinion columnist covering technology regulation, artificial intelligence, and social media. Her new book, Supremacy: AI, ChatGPT, and the Race that Will Change the World, tells a tale of rivalry and ambition as it chronicles the rush to exploit artificial intelligence. The book explores the trajectories of Sam Altman and Demis Hassabis and their roles in advancing artificial intelligence, the challenges posed by corporate power, and the extraordinary economic stakes of the current race to achieve technological supremacy.
What follows is a lightly edited transcript of the discussion.
Justin Hendrix:
This book is a story of two individuals at its heart. You tell the tale of Sam Altman and Demis Hassabis, and you take us through their backgrounds. You take us through their early days. Why did you feel it was so important for us to understand the teen years of these men before we get into the race for AI supremacy?
Parmy Olson:
Well, first of all, I mean as a journalist, I just find both of these men very interesting and I think they are two of the most important builders of AI right now. And I also wanted to tell the story through them and their experiences because I think one thing that we sometimes forget about technology is that it is designed by human beings. Every aspect of every app or widget that you use on your phone is there because of a design decision by a human or some choice that's been made by a product manager. And I think to really understand where we fit within tech so that it doesn't just disappear and become abstract and somehow something that we can't really control, I think it's important to actually shed light on the people behind this technology and tell their stories. So that's what I set out to do.
Justin Hendrix:
Tell me what stands out about Sam Altman and his upbringing in St. Louis? You talk about his high school years and how he learned a thing or two being somewhat of a social justice activist in his high school. What do those early years tell us about the man he'd become?
Parmy Olson:
I interviewed Sam's... sorry, I'm based in the UK, so I would say headmaster, but in the US he's the principal of the school that Sam went to, which was a very prestigious private school, and he said Sam was one of the best students he ever taught. And one of the things that really made Sam stand out from his other classmates wasn't just that he was leading the water polo team or that he started the LGBTQ support group.
And it wasn't even just that he stood up in front of his entire school and said that he was gay, and that there were a lot of gay students who needed support, which was an incredibly bold thing to do. It was also that he had no hesitation in approaching figures of authority: he would frequently go into the office of the principal and talk to him whenever there was something that bothered him about something a teacher had done, something that maybe the coach had done, or the fact that a Christian group within the school had boycotted the assembly where Sam spoke about his LGBTQ support group. He was incensed by that and went and spoke to the head teacher and complained and said, "Those people should be marked as absent."
But that was just a frequent thing that he did. He was very good at connecting himself with people in authority, with leaders. And I think that was a through line in his career, including when he got older, dropped out of Stanford, and joined the startup accelerator Y Combinator, where he connected with another mentor, Paul Graham, one of the most important figures in Silicon Valley. Talking about Sam's high school years is not just interesting; I think it gives a little bit of insight into the person he would become and why he would become so successful and powerful as one of the founders of OpenAI.
Justin Hendrix:
He's someone that mentors and people in power almost want to invest in. They want to help. He's good at creating the impression that he's a vessel for their interests.
Parmy Olson:
Yeah, perhaps. And I think that's something that's been said about Sam: that he knows how to give people what they want, and he can do that whether it's in a business meeting or with a regulator or a government official. I was really struck, I think it must've been about a year ago now, after ChatGPT came out, when Sam Altman went on this world tour, speaking to leaders of the European Commission and the European Union, heads of state in Europe. He testified before the Senate. He had meetings with senators and congressional leaders. And the feedback was overwhelmingly positive; there was this sense that here was a man who wanted his company to be regulated and wanted to work hand in hand with these government officials and leaders.
Even though ultimately, of course, as the leader of a private company (something OpenAI is becoming more and more as it moves towards restructuring itself away from being a nonprofit), that is something a typical company leader would not want to do. They want to avoid regulation as much as possible. But a frequent and interesting tendency of Sam Altman is to lean in towards leadership and authority, and also to lean towards controversy and towards those difficult things, so that he can inoculate himself against any potential challenges that could come up later.
Justin Hendrix:
So you pair this vision of what you call the scrawny but placid entrepreneur in his late thirties with a former chess champion in his late forties, obsessed with games. What should we know about this other main character?
Parmy Olson:
This other main person is of course Demis Hassabis, who is older than Sam and got into computers about a decade earlier, but in a big way. He started off as a chess prodigy: a champion at the age of 13, one of the highest-ranked junior players in the world. And he was also obsessed with games. When he was a teenager, he had this epiphany during a chess championship that there were all these people playing chess who were wasting their brain power when they could be using their brains to solve major global problems or cure disease. And he felt his way of doing that, of using his brain, was to build artificial intelligence: to find a way to extend the power of computers in such a way that our own brains could be augmented to solve all sorts of societal problems.
And it's interesting, he tried to marry these two passions of games and artificial intelligence, and he started by founding a game design company that showcased the power of AI systems within its games. Unfortunately for his company, the games were just so technically ambitious that they failed under the weight of all the time and effort and the complexity of the technology behind them. The gameplay actually was not very good; the games were quite boring. And so Demis's first real entrepreneurial venture had to shut down, and he ended up wandering in the wilderness for a little bit. It was an incredibly difficult time for him, because he and his co-founders had all graduated from Cambridge and he'd been winning things his whole life, and here he was now a failed entrepreneur. And especially so in the UK: if you come out of a failed startup in Silicon Valley, you just move on to the next thing; not so in the UK. It's a lot harder to bill yourself as a serial entrepreneur on this side of the pond.
But he ended up doing a PhD in neuroscience at UCL, University College London, which was very well received by the scientific community, and from there went on to start DeepMind, which was, for a while, the leading AI lab globally, I would say.
Justin Hendrix:
This is the story of how these two men made their path, how they were able to raise huge sums of money to pursue their different visions of artificial intelligence, but also, I think, how their imagination for what AI could be, what it could do in the world, ran squarely into the vortex of the technological imagination of big companies like Google and Microsoft. How do you think about that as a framing of this book? It seems to me that you're telling a story of two people, but you're also telling a story of monopoly, this general context that we exist in where the end of the road would be to run squarely into the maw of these big tech companies.
Parmy Olson:
I think that's such a great summary of pretty much one of the key takeaways from the book, and I love the word vortex. I like to frame it as the gravitational pull of these tech companies. And I think in a lot of ways, Sam Altman has now been painted in the press as this kind of villain character, to some degree. Demis Hassabis, not so much; I don't think he's quite as well known publicly. But ultimately what happened with both of them is they both set out with incredibly, almost utopian, idealistic, altruistic goals to build AGI, a.k.a. artificial general intelligence, or AI that is as capable as the human brain. And they both set out to do that with this public notion and objective of: when we build this, we are going to use it to solve major societal problems, cure disease. Demis would talk about solving climate change. Sam talked about using AGI to usher us into an age of abundance, essentially elevating the wealth of everyone on earth. That was what they were aiming for.
But exactly as you say, Justin, building AI is just expensive, and over time they were caught up in this vortex: their noble ambitions essentially gave way to these partnerships that they had to make with big tech companies. And so I don't see any real Machiavellian intent in either of these figures, these visionaries. Some might say they sold out, and some might say they compromised in a big way on the ideals that they had. In order to reach that end goal, to build AGI, they had to let those humanitarian goals fade into the background, because the companies they ended up working with, who they needed to work with to get the funding to build AGI, are optimizing for profit and growth, not for societal benefit in the way that Sam and Demis had started off wanting to do.
Justin Hendrix:
In chapter 11, you talk more about how they became bound to big tech. I mean, we've got OpenAI, which of course famously operated as a nonprofit or an extension of a nonprofit, and then in the case of DeepMind, this dalliance with Google and the constraints that created for it. Was it ever possible for these companies to maintain this sort of general ethical vision that they purportedly started out with? Or were the capital constraints always going to create the types of conflicts that you talk about?
Parmy Olson:
I almost feel like, in the world that we currently live in, maybe that would've always been the inevitable outcome. Maybe in a parallel universe, if you believe in the parallel universe theory, there might be a version somewhere where they did manage to set up responsible governance and oversight of their AI companies and didn't get sucked into the gravitational pull of these huge tech companies. But ultimately that's not what happened.
And it's great that you bring up governance. To be honest, that's why I named the book Supremacy, because this is really a story about control. If I could have named it Governance, I would have, but nobody would buy a book with the title Governance, because it's a boring title. But really, this is what it was about. I was always intrigued by the fact that, when they started their companies, both Sam and Demis were concerned not about killer robots but about this corporate control of AI, and that they both tried to set up unusual governance structures.
So I think it's quite well known that Sam started OpenAI as a nonprofit and eventually, over time, restructured it to become more of a for-profit entity. What's less well known, and one of the impetuses for me to write the book, is that DeepMind also spent several years trying to break away from Google and become a separate entity, which was almost like a nonprofit. They came up with this governance structure they called a global interest corporation, sort of an NGO/UN-style organization. And they wanted to be governed by an ethics board, so that if and when they ever reached AGI, some really high-ranking people would oversee it. We're talking former politicians: they reached out to people like Barack Obama and Al Gore, people connected with the UN and various universities, and some of these people even agreed to be on this board.
That is how concrete the plans were at DeepMind to break away, but ultimately Google did not let them spin out. That was the goal, though: they actually hoped to spin away from Google in an effort to protect their technology from an organization that had so much power. Tech companies now have this potential to reshape jobs, education, healthcare, even warfare, but these are profit-driven companies who are only really accountable to shareholders. There is no proper regulatory oversight of AI or AI models, not yet anyway, though we could talk about the EU's AI Act. That is a big reason why they wanted to spin out.
Justin Hendrix:
It's not just these two characters, of course, that you chronicle here. There are a number of others that come in, many of them very much interested in this kind of brinkmanship, this chess game that's going on between the big tech companies, the big pools of capital, the big ambitions to change the world and change society. I'm talking about folks like Peter Thiel, Elon Musk, Mark Zuckerberg, Eric Schmidt. They also come in throughout. How do you think about that cast of characters? They're not, in some ways, the prime actors, but they appear to be pushing and pulling strings and very much shaping the future of artificial intelligence.
Parmy Olson:
I mean, they all have an important role to play. Mark Zuckerberg's an interesting one recently, just because of his push into, I'd say, so-called open source AI models. Larry Page also was critical as the deal maker with Demis. He was the person that really built a relationship with Demis to buy DeepMind. And Larry has this computer science background: his father was a computer science professor who had this kind of involvement in artificial intelligence, and Larry was very, very interested in that and very, very interested in the building of AGI.
And Elon Musk is another important character in all this, of course, as he often is. He was one of the co-founders of OpenAI, and it was very much also his impetus to make it this open platform that was not beholden to any large tech company. In fact, one of the reasons he wanted to start OpenAI was because Google had bought DeepMind, and he knew that DeepMind was working on some very cutting edge stuff. He'd seen some of it because he had also been an investor in DeepMind, one of the first investors. In fact, at one point, and this is also in the book, he tried to buy DeepMind with Tesla stock, but the founders spurned him. And as has been publicly reported, Elon of course also tried to become the CEO of OpenAI and take it over. The founders again spurned him on that.
But yes, there is a lot of brinkmanship going on between people like Elon, and a lot of conflicting and buffeting ideologies as well: effective accelerationism, longtermism. When I was talking to people for the book, people who had worked at DeepMind and people who had worked for OpenAI, there was almost this cult-like reverence for the idea of building AGI. And people who work in these companies almost exist in this bubble where this endpoint is so important, and they believe in it so much, that it's almost hard to see the forest for the trees when they think about what the other unintended consequences could be.
Justin Hendrix:
I remember the reporting on Ilya Sutskever apparently leading a chant at OpenAI where he would say, "Feel the AGI, feel the AGI," almost like this kind of invocation of a spirit that we're going to unleash on the world somehow. How much of this is essentially these men conjuring a new religion?
Parmy Olson:
Oh, 100%, it's like a new religion. When I was interviewing people for the book, I spoke to one guy who used to be a hardcore effective altruist, and he had also been working with companies who were trying to build AGI. He'd been brought up as a Southern Baptist, and he said it was so interesting coming from that background, because he felt like a lot of people who were working in this industry and trying to build AGI... he called AGI "rapture for nerds." He was like, it's basically like the rapture, because in the same way people talk about it in a religious setting, it's often very abstract and theoretical. We don't exactly know what it's going to look like or when it's going to come. And in this case, we don't know what AGI is going to look like or when it's going to come. It's also often five years out, and then five years comes along and it's still five years out. This was a common refrain at DeepMind: Demis would always say, "Oh, it's about five to 10 years out."
And there are some really wild ideas held about what would happen when we got AGI. So for example, I spoke to one former executive who said he remembered a conversation between some managers at DeepMind where they were saying, "Actually, we probably won't need to worry about raising money in the next few years, because when we have AGI, the economy will be abolished. We won't even need money." And this was not said with any hint of sarcasm. They fully believed this. And actually, if you read some of the recent postings from Sam Altman or Dario Amodei, the co-founder of Anthropic, you see some of these notions about how AGI is just going to completely transform our economy, how the way we use money will have to be completely reinvented. Some of these notions are quite wild and outlandish, and for me there were echoes of what you might find in some religious groups as well.
Justin Hendrix:
Maybe that's one of the reasons I've always felt that way about this. I was also raised in a Southern Baptist community. When I read those things like Sam Altman's claims about abundance, it very much sounds like the promises of New Jerusalem, the promises of heaven right around the corner. It's like the promised land.
Parmy Olson:
But it's so vague as well, what does abundance even mean, right? And there was an interview he gave with The New York Times, I think it was a year and a half ago, where he talked about when we have AGI, there will be trillions of dollars of new wealth just somehow appearing in our economy. When it comes from someone like him who is so respected and so seemingly grounded in engineering principles, it's hard to just dismiss it, and yet it actually sounds so crazy in a lot of ways. But that's interesting that it was almost triggering for you.
Justin Hendrix:
In chapter 15, you say, "10 years ago telling someone that you were building human level AI systems was on the same level of crazy as explaining your plans to be cryogenically frozen." It's interesting that some of the folks who are building these systems are also interested in various life extension kinds of ideas. But you also go on to say that people have begun to take these guys seriously. There's also a group of folks in your book who don't take them seriously. In fact, they've emerged as a kind of intellectual counterweight. You talk about people like Dr. Timnit Gebru, Margaret Mitchell, Emily Bender, folks like Meredith Whittaker. How do you think their role has evolved here, despite the strength of their ideas and their own efforts? Do you see this effort to contain the hype around AI, to contain some of this fervor and religiosity, catching on among policymakers and elites in the same way that the promise is?
Parmy Olson:
I'm sure that it has to some extent. How much is hard to say, but they absolutely did an incredible job of just making a lot of noise about the problems of generative AI. So they appear in a chapter I entitled The Fight Back, because there was this group of women at Google and the University of Washington, and some male researchers as well. But I say women because they were pointing out potential gender biases in these models. As a woman, you might have skin in the game, and that's actually why, if you look at a lot of AI ethics researchers and the people pushing for this, they do tend to be women and people of color. And this was such an important thing that they did, which was writing this initial research paper, called the Stochastic Parrots paper, which was basically an early warning about some of the potential unintended side effects of the very large language models that companies like Google were building.
They'd put this paper out well before ChatGPT came out. At the time, Google was working on a large language model called LaMDA, which it didn't release; it kept all of this proprietary until OpenAI sparked this arms race with the release of ChatGPT, and then Google just jumped in. But prior to that, yes, this group of computer scientists and researchers sounded this warning. And I think it got a lot of press, more so because two of those researchers, Timnit Gebru and Margaret Mitchell, were fired within months of the paper being written, and that just drew even more attention to the concerns they were raising about potential misinformation and, again, about the entrenchment of potential stereotypes in these models.
Now, what I find interesting is that in the last, let's say, year and a half, just when I look on social media, and it might just be me because of my filter bubble, I am not seeing those voices as loudly as I did when ChatGPT first came out and in the first eight to nine months after that. I don't know about you, Justin, but it just seems to have lost a little bit of traction. Having said that, we do have regulations coming in, like the European Union's AI Act, that address issues like bias in models, stereotyping, fairness, cybersecurity, hallucinations, and things like that. It's an issue that has not been solved.
There was a recent study that came out from a couple of groups that are working with the European Union on the implementation of the AI Act. They basically made a framework that tested all the main generative AI models for how well they complied with the act, and one of the places where companies like Google and Microsoft are really falling down is on fairness. So this issue of ensuring that their models are not biased has not been fixed; it's still an issue, and these companies are going to have to work on it if they want to be in compliance with the act when it really comes into force. I believe that's next year, or I think there's a final deadline of August 2026, but you might know that timeframe a little bit better than me.
Justin Hendrix:
You chronicle in this book a little bit about the rollout of Character.ai, which in some ways feels to me like a representation of the extreme end of some of these ideas, built by people who came out of Google, which, as you mentioned, had previously been slightly more careful, or had been trying to be slightly more careful. Then you have something like Character.ai, which post-ChatGPT just said, let's get it out there. Let's move fast. Let's not have these restraints that might otherwise be applied within Alphabet. Just last month, we saw the first lawsuit around the death of a teenager who apparently developed this, I suppose, parasocial relationship with a Character.ai chatbot. I'm interested in this question of whether you think things are going to speed up now more towards harms. I mean, there's so much capital at play. It seems like there's only one answer, and that's go faster.
Parmy Olson:
Well, the optimistic part of me likes to think we as a society surely have learned from what happened with social media, when we let these companies grow and optimize for growth and scale without proper checks and balances. And what ended up happening, of course, was the spread of conspiracy theories, polarization, addiction, mental health problems, particularly for teenagers who are on Instagram. We learned a lot from the Facebook whistleblower, Frances Haugen, about that. And so I hope we have learned that if we let technology companies run rampant and deploy systems to consumers to the point that the toothpaste is out of the tube, you can't put it back in, because people start using these systems and start to rely on them, and then it becomes very hard to regulate those systems. But you would hope that we've learned from that.
I don't know that the public has. I think the public sees services and products that are convenient, that make their work more efficient, that make life more seamless, and they embrace them, without question really. Because it's hard: as a consumer, you can't vote with your feet when it comes to tech companies. There was this great article by the New York Times writer Kashmir Hill. When she was with Gizmodo, she did this piece where she tried to spend a week not touching any of the big tech companies in her life, whether it was Google, Apple, Facebook, or Amazon. And it was impossible to live her life, to do her job, because of every company she had to deal with: if she wanted to go to a website, that website was probably hosted on Amazon's AWS; she couldn't do email, couldn't use a phone. So we're completely reliant on these companies. And whatever they have coming down the pipe, we're going to use.
But I do hope that, in spite of the potential harms that could come, we do have regulation coming. Being based out here in the UK, I talk a lot to antitrust officials over here and to analysts who are looking really closely at the AI Act from Europe, and there's a lot there. I think it's quite promising. It's not perfect. It's vague, and there are absolutely criticisms of that; I think even the EU AI Act's own architect has complained about it. But hopefully that's going to improve with better standardization over the next year and a half from standards bodies, and just a more concrete, practical explanation of how to comply with the act.
But in terms of potential harms, gosh, where do you even start? I find the potential addiction to AI companions really compelling; I've got this morbid fascination with it. I'm actually working on a column right now about the future of human-AI companionship, because there are toys now for kids that have GPT-4 in them. On Character.ai, I interviewed some teenagers who will literally spend five to seven hours a day on the service, and adults who use these romance bots on apps like Replika. And the really interesting thing about all these services is that the AI chatbots are incredibly agreeable. They're like the perfect partner, the perfect playmate, the perfect friend. And when things become so easy like that in a relationship, I think that might actually make humans harder to relate to, because we're not agreeable; we actually have hard edges. And so I wonder, would that be an unintended consequence? I don't think we could ever have predicted the unintended consequences of social media, and that's why it's so hard to really know what the potential side effects, the toxic side effects, could be of AI. But maybe that's one.
Justin Hendrix:
One thing I'm struck by in this book is what we've talked about, this feeling of there being a vortex, this gravitational pull, as you put it, that's pulling people towards investing more capital, and governments towards investing more in artificial intelligence procurement. I was struck reading the Biden Administration's national security memo. I was reading it with this lens of: what does this document say about how the government currently thinks about its relationship to the private sector when it comes to AI? And reading your book, I had a similar sense of governments being, in many ways, beholden to, or maybe in the future dependent on, these firms in order to deliver on what they promise. I can understand if you're a politician at a certain level, you're looking out at the world and all you're seeing is problems: how to get people benefits checks efficiently and on time, how to make government work, and also how to get reelected. It must be pretty attractive if someone comes along and says, "We've got this sorted. We'll deliver abundance for you. All we need is some capital and an advantageous regulatory environment."
Parmy Olson:
Just look at the incredible amount of influence that Elon Musk has now on government officials, whether it's through Tesla, whose policies I think have actually helped reshape US government policy around electric vehicles. He's got a direct line to people at the Pentagon because of Starlink. He's like the new NASA with SpaceX. This is an incredible amount of influence for one person, let alone a technology maverick and entrepreneur, to have on government.
I thought it was also really telling when Sam Altman did his testimony before the Senate a few months after the release of ChatGPT. That hearing was meant to address congressional concerns about AI, but what ended up happening, to your point, is that I remember one senator asking, "Sam, would you like to be the regulator?" He actually asked Sam Altman that question, and Altman said something like, "Actually, I like my job, so no, thank you." But I think that really illustrates this almost strange level of dependency that many in government, many lawmakers, have towards Silicon Valley, perhaps influenced in part by the army of lobbyists and political operators that tech companies have in Washington DC, because they can absolutely afford it. But I think it's also just because these companies are, again, like our infrastructure; they are modern day utilities.
Justin Hendrix:
Yeah, there are these individuals that show up to Capitol Hill, and it's notable who gets called by their first name during hearings. Sam, as you point out, he's addressed by lawmakers during that hearing as Sam. Brad Smith from Microsoft is another character. I think of him in my mind as Senator Brad Smith. He's constantly on Capitol Hill and everyone calls him Brad. Kent Walker from Google is another one.
I want to ask you another question about your diagnosis here around the scale of these firms and the gravitational pull they create, and something else you've been writing about lately, which is Google's possible breakup, depending on how remedies get sorted in the search antitrust case. Of course, we've got another case being brought by the Department of Justice at the moment around ad tech. There is some uncertainty about how those will move forward. But do you think it's possible that any of this movement to address monopoly, in the case of Google and perhaps other tech firms (there are cases right now against Meta in the US and abroad), will work? Do you think that ultimately governments will act to contain these companies? Or will they ultimately concede to the promise of these firms and what they mean for our future and our national security?
Parmy Olson:
I wish I had a definitive answer, but I think it's really hard to say. If you had been talking about breaking up Google a year ago, you probably would've been laughed out of the room. So the fact that antitrust regulators are seriously talking about structural remedies makes it certainly plausible that a breakup could happen, all the more so if the European Commission joins in the effort. If it's just the DOJ acting alone, I think it's going to be really hard to make it work. But if, when Judge Mehta rules in August next year on the case where the DOJ has called for structural remedies, he calls for the same thing, I think it could have all the more impact. And it's all the more likely to happen if the European Commission also calls for and pushes for that as a penalty.
Because fines, and you know this better than anyone, you've been covering this for so long, fines just don't work. It's just a cost of doing business for these companies. There was one point where Meta, a.k.a. Facebook, was fined $5 billion (that was the US Federal Trade Commission), and the stock went up. It just has absolutely no impact on their share prices. In the last two years since ChatGPT came out, and I calculated this recently, the six biggest tech companies' aggregate market capitalizations grew by $8 trillion. That's just how much they grew in two years. And so none of these regulatory speeding tickets is really having much of an effect. So I'm hopeful that some kind of breakup could actually lead to a more competitive market and just put that power in check, whether it's spinning off the ad tech business or Chrome; I think that's still to be decided.
Justin Hendrix:
It feels uncertain to me whether that'll happen, and given how many years it might take, it strikes me we'll be in a completely different place in the AI conversation by then, possibly with a much different generation of these technologies. I'd be interested in your perspective on whether these claims around AGI should be taken seriously. You've talked to a lot of people for this book. I think at one point you mentioned the timeframe being certainly slightly further out than many Silicon Valley folks might be saying these days; I think you put it in a 10 to 50 year open window. Do you think we're going to eventually have to come back and have this conversation about machines of loving grace?
Parmy Olson:
Will we actually reach the rapture for nerds in our lifetimes? I think you can say yes to that question, because the definition of AGI is so broad, in exactly the same way that the definition of artificial intelligence is so squishy and gray and broad. You can just point to something and say, "That's AI," and everybody just agrees with it, because it's semantics, right? And it's this story that everybody has completely bought into now.
And sure, Hassabis has said that we're going to have AGI within 10 years. And I think, yeah, sure, maybe we'll reach a point where some of these large language models, combined with reasoning models, combined with image generation models, combined with robotics... that's the idea of artificial general intelligence. It's not just a narrow domain, but a system that can have general cognitive ability in the same way that our brains do. Sure, we'll probably have something like that within the next 10 years, and some people are going to call it AGI, and then a lot of people are just going to say that it is AGI, in much the same way they call the systems we use today artificial intelligence, even when they probably aren't really artificial intelligence.
Justin Hendrix:
This book's called Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson. It's out from St. Martin's Press. Really appreciate you taking the time to talk to me today.
Parmy Olson:
Oh, it was my pleasure. Great conversation. Thank you, Justin.