
Decolonizing the Future: Karen Hao on Resisting the Empire of AI

Justin Hendrix / May 23, 2025

Audio of this conversation is available via your favorite podcast service.

In his New York Times review, Columbia Law School professor and former White House official Tim Wu calls journalist Karen Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, “a corrective to tech journalism that rarely leaves Silicon Valley.”

Hao has appeared on this podcast before, to help us understand how the business model of social media platforms incentivizes the deterioration of information ecosystems, the series of events around OpenAI CEO Sam Altman’s abrupt firing in 2023, and the furor around the launch of DeepSeek earlier this year. This week, I spoke with Hao about the book, and what she imagines for the future.

Penguin Press, May 2025.

Karen Hao:

My name is Karen Hao and I'm the author of Empire of AI.

Justin Hendrix:

Karen, I'm so excited to speak to you about this book. It is quite a tome. You say it is based on 300 interviews with around 260 people, correspondence, documents, seven years of reporting at MIT Technology Review, The Wall Street Journal, The Atlantic. You spoke to 90 current and former OpenAI executives, 40 current and former executives and employees at Microsoft, Anthropic, Meta, Google DeepMind, and Scale AI. This was a piece of work. This is an accomplishment that goes beyond what most people would do in their career.

Karen Hao:

It definitely feels like the culmination of all of my work, I should say. It feels like everything that I've ever reported on up until this moment, and even before my journalism career, everything that I experienced when I was working in tech, I feel like I poured all of that into the book. So yeah, it does feel like an accomplishment. I do feel proud of the work that I did to put that together.

Justin Hendrix:

Well, I'm catching you on the beginning of your American book tour. You write at the outset, "This book is not a corporate book. While it tells the inside story of OpenAI, that story is meant to be a prism through which to see far beyond this one company. It is a profile of a scientific ambition turned into an aggressive, ideological, money-fueled quest, an examination of its multifaceted and expansive footprint, a meditation on power." When you first set out to write this book, did you know that's what it would become?

Karen Hao:

Absolutely. I mean, it's funny, when I first set out to write the book, I actually didn't know it would focus so much on OpenAI. I really wanted to first tell the story of how the AI industry was increasingly becoming a new form of empire. I realized as I was mapping out how I would tell this story and make it feel real and concrete and have some kind of backbone to guide readers through this epic, decades-long journey that has culminated in this frenzy that we see in the last few years, I had to do it through OpenAI.

So I changed my plan for the book and was like, okay, in addition to reporting out the history and the impacts all around the world that I want people to see, to really concretize what I mean when I say empire, I also need to do a lot of insider reporting, to just map out what decisions people were making within this company that then had ripple effects that shaped the AI industry and the rest of the world.

Justin Hendrix:

So you say that this word empire is the metaphor through which you've come to understand AI and the AI industry, and OpenAI. You talk of colonialism and exploitation. In the chapter Dreams of Modernity you say you, quote, "realized that the very revolution promising to bring everyone a better future was instead for people on the margins of society reviving the darkest remnants of the past." I guess when you think about how you got to that point, when did it begin to dawn on you that that was the story you were writing?

Karen Hao:

This goes back to reporting that I did years ago for MIT Technology Review, starting in 2019, when I started encountering lots of different examples of companies engaging in a very particular colonial or imperial mindset. A lot of companies from the global north would go to countries in the global south to exploit labor.

That labor was hidden away from their users, so the companies could continue to perpetuate the narrative that AI was magic. Or they would go to the global south to collect data, because these are environments with fewer data privacy protections. Generally, if a company is found to be doing something wrong there, there are just fewer ramifications.

As I was trying to figure it out, I realized, wait a minute, this is definitely a pattern. I was looking at how facial recognition companies were swarming into South Africa, for example, and using that opportunity to gather a lot of data on Black faces. At the time, the industry was getting heavily criticized for the fact that its technologies were less accurate on dark-skinned individuals.

That process in and of itself was then leading to what scholars in South Africa were calling a digital apartheid: a recreation of apartheid where there was a patrolling of Black bodies that felt exactly the same as the way the apartheid system limited the freedom of movement of Black people.

So those were the types of examples I kept seeing. I was like, this builds up to a bigger picture. It's not just one-off stories. There's a broader, global system of power that is causing these stories to happen again and again.

So I ended up writing a series called AI Colonialism that published in 2021. That was the first time that I concretized this thesis that a lot of the profit-driven incentives of the AI industry, and the exploitative and very consumptive nature of AI technologies as they were being developed largely by Silicon Valley, were perpetuating all of these colonial and imperial dynamics that we still live in the legacy of today.

Justin Hendrix:

You say OpenAI is now leading our acceleration towards this modern-day colonial world order. When you think about that diagnosis, when you think about framing OpenAI, I suppose as a leader in this, what are the key things that you would put at their door?

Karen Hao:

Yeah, I think maybe the best way to answer this question is to flesh out a little bit more how my argument around empire has started to evolve since reporting this book. So originally I pegged empire to the AI industry. In the book, I actually call OpenAI an empire, and each company its own empire.

Which I think is an important shift in distinction, in that one of the key features of empire-building back in the day was creating this idea that you are morally superior to other empires. So you need to be an empire because there are other, bad empires out there that are going to bring the world to a terrible place. So you must build yourself up and be strong and fortify yourself so that you can outcompete them.

This was like how the British Empire would always say that it was better than the Dutch Empire, and the French Empire would say it was better than the British Empire. That was a core justification for why they were plundering resources and exploiting labor all around the world. They also had this very inherent belief that they were doing all of this under a civilizing mission, that they were truly bringing benefit to everyone and giving them the opportunity to actually ascend to heaven instead of going to hell.

That is essentially the dynamic that we see now within the AI industry where all of these companies that I define as empires of AI, they're all trying to compete with each other with this kind of rhetoric. They alone are the ones with the scientific and moral clarity to bring people to heaven or else risk sending everyone to hell. OpenAI really was the first company to essentially put a marker in the sand and turn it into a race.

So before, DeepMind had been using the rhetoric of building artificial general intelligence as a motivation for why it existed, but it didn't approach this as a race. It didn't approach the quest for AGI as something that required an enormous amount of scale and an enormous amount of resources.

It was OpenAI that fashioned itself as an anti-DeepMind, an anti-Google: we're going to be a nonprofit, we're going to be the good guys that bring this technology to everyone. In order to do that, we're going to blow up these models, blow up the amount of data that we use to train them, blow up the number of data centers and supercomputers that we need to train them, and do it aggressively such that we continue to stay in the leading position.

So that is what kicked off an entire industry-wide race, where every single company, whether old ones like Google and Meta that have entered this race or new ones that have formed like Anthropic and Perplexity, has started rushing in this direction. This escalation of the size, the scale, and the resource consumption is what I highlight in the book as truly the thing that is bringing us towards catastrophe.

We are returning to an age where the majority of the world now doesn't necessarily have rights anymore. It is these companies and the people at the top that are deciding what goes and what doesn't go, what data they can take and who gets to have privacy versus who doesn't. Who gets to have economic opportunities and who doesn't. Everyone else lives in the thrash of their decisions and their competition.

Justin Hendrix:

Of late, OpenAI has really put its shoulder into going out and trying to build connections with foreign governments. Of course, it's wrapped itself, I think, in the American flag. We should talk a little bit about that, the extent to which this is such an American project in many ways. Just in the last week we've seen Sam Altman over in the Middle East, along with Trump and other AI executives, meeting with folks in Saudi Arabia and other places around assembling a vision of artificial intelligence.

We've seen OpenAI release this, I don't quite know what to call it, a product or product line or a service proposition around quote-unquote democratic AI. This all-encompassing proposal really around, we're going to do everything for you: we're going to build AI for your government, we're going to help you establish a startup ecosystem around AI, we're going to bring capital. It seems like at this level, OpenAI really is empire-building, or trying to insert itself into nations around the world.

Karen Hao:

Yeah. I loved your piece, by the way, talking about how OpenAI was wrapping itself in the American flag. I very much think they are. That is exactly right. They're using this as a tactic to ferry themselves out further and further and get into higher and higher echelons of power, to entrench their relevance and entrench their infrastructure and their approach in physical data centers that cannot be unwound once they're built.

There's this realization that I hit upon while I was reporting that I don't explicitly call out in the book, but there is an undertone throughout the book that the US is also an empire. I mean, scholars have long made this argument. Now it feels really hard to argue against, with what's happening with the government now and the rhetoric that it uses to talk about taking over Greenland, taking over Canada, and things like that.

We really are seeing the alliance of Silicon Valley empires, which are corporate-based empires, and the government as a state-based empire, both trying to use one another to fortify their empires. This has a historical analog, in that the British East India Company was a company that engaged in economic activity, generally mutually beneficial economic activity, around the world, first in India.

Over time, through capitalism, through that economic activity, they gained more and more economic leverage in India, and then more and more political leverage, until they suddenly reached a point where they were able to act completely in their own self-interest and do whatever they wanted without any consequence to themselves, only consequences for other people.

Then they transformed from a company into an imperial power. They did that all with the full backing of the British crown, of the state, of the state-based empire. Eventually the state then benefited by nationalizing the British East India Company, and that's when India officially became a colony of the British Empire.

Before then, there were a couple hundred years in which it was actually a company that was ruling the Indian subcontinent. So I think we're literally seeing this play out again, where we first have companies that are trying to extend and create these economic, financial partnerships. Like, "Hey, you rub my back, I'll rub yours. You invest some money in my data centers, I'll invest some money in your data centers."

Then as their tentacles reach further and further, the US government is also making its maneuvers to ride on that wave and expand its empire as well. So once you see that picture happening, I mean, it is extraordinarily alarming, because we are at a point now where the supposed leader of democracy, of the democratic world order, has decided that's no longer the game to play. This quest for AGI is oiling the wheels for this to happen faster and faster.

Justin Hendrix:

One person who would not speak to you, among those many dozens that you did speak to, was Sam Altman, the CEO, of course, and cofounder of OpenAI. I want to talk just a little bit about his unique personality and the fact that it's him leading this charge. You talk at one point about the fact that so many around him are unnerved by his "self-serving pursuit of power and his compulsive dishonesty."

He's also an extraordinary operator, with just this incredible ability to really move in those halls of power. You go back in time; you talk a little bit about how he laid out a vision even early on that he wanted to be close to policymakers, to presidents. That he wanted to be a source of, quote, answers when they need to make big decisions. He's setting himself up very much as this kind of person who is shoulder to shoulder with the world's leaders and has the answers to their problems.

Karen Hao:

Yeah, absolutely. I mean, he is a once-in-a-generation storytelling talent. He's an incredibly strategic mind. He plays the long game. So he is able to do all these things also because he has a very good grasp of what people want.

So when he gets into a room with someone, he's able to negotiate incredible deals with them. He will portray some kind of vision of the future that they want to be a part of, based on what they need to hear, so that they then suddenly put down lots of money, or put down their authority, or whatever it is, to be part of that future, to have a piece of that future.

As you alluded to, he also has a loose relationship with the truth, which is part of why he's such a powerful storyteller. He's able to spin stories irrespective of whether they have much bearing on reality. That is what has allowed him great success and also has led to just a trail of controversy in his wake. Throughout the book, I explore how different people slowly come to the conclusion that he is manipulating them.

What I realized over time talking with so many people, some of whom love him, some of whom hate him, is that if you believe or if you align with his view of the world and his vision, what he's trying to accomplish in that moment, he is the greatest asset ever to have in your corner. He will persuade whoever needs to be persuaded, he will rally whatever capital needs to be rallied to do that thing.

If you disagree with him and his worldview, then he's the greatest threat ever, because now you are up against one of the most powerful narrators of the future ever. You have to somehow convince everyone else not to fall into that trap. So all of those skills, I think, position him particularly well to entrench relationships with people in politics, because politicians also inherently run on stories. He's very attuned to the you-rub-my-back-I-rub-yours dynamic of politics as well.

So that's part of the reason why he's been able to rise higher and higher. Also, part of the reason why many employees within OpenAI or other people in his orbit have a huge loyalty to him because they think if they attach themselves to him, they will also be richly rewarded.

Justin Hendrix:

The book is replete with your interviews and interactions with other key OpenAI executives, Greg Brockman, Ilya Sutskever, many others. I want to step away from the commanding heights and come down a little closer to the earth.

Some of the other people you talk to for the book include people like Mophat Okinyi in Kenya, a person who worked for the outsourcing firm Sama. He was at one point working with a quota of 15,000 pieces of content a month on the sexual content team. Essentially reviewing material that was coming out of OpenAI's products and helping to annotate it and classify it, presumably so that OpenAI could prevent that type of material from emerging from its models.

What did you learn talking to Mophat and other people like him who were literally doing the in-the-trenches work of making these models that Sam Altman is selling at such a high level, making them work?

Karen Hao:

Philosophically, empire is about hierarchy. It's about this belief that there is a group of people who are superior either because it's their God-given right, or it's their nature-given right. Therefore, they get to rule over people that are inferior. If you want to talk about that aspect of empire, going to talk with people like Mophat during the course of my reporting just hammered home how deeply hierarchical this world order that these companies have created is.

Mophat's experience, he was working for this third party, as you said, Sama. Sama gets a call or an email from OpenAI at some point saying, "We want to give you this contract to build a content moderation filter for our models." At the time, OpenAI was just starting to think about creating not just research models but consumer products off of their research models. They thought if we're going to have a text generation machine that can talk about anything, we really shouldn't be putting that into the hands of consumers and having it spew all kinds of toxic hate speech and all the bad things on the internet that it was ultimately being trained on.

So the best way to make this into a consumer success or a business success rides on the ability to contain the toxicity of the model. So they wanted to create this content moderation filter. They end up finding the outsourcing firm. Then Sama brings in all of these Kenyan workers like Mophat, like Alex Cairo, who I don't write about in the book, but was another worker that I interviewed to do this work. What they were doing was day in and day out reading some of the most horrific text that you can find on the internet, and also some of the most horrific text that could be dreamed up by AI models themselves.

OpenAI was prompting AI models to come up with all of these different variations of horrific scenarios to give to the workers so that they could have a breadth of examples of everything that they wanted to filter out of the systems. So those workers were then reading that and then putting it into a taxonomy of is this violent content or sexual content? Is this graphically violent content? Is this sexual content abusive? Does that abuse involve children?

So it requires them to read the whole thing in detail and catalog everything. Just like content moderators in the social media era, it completely broke them. It broke them, it broke the people who depended on them. So in the book, Mophat had a family, he had a wife, he had a stepdaughter. As he completely started to lose his sense of self from doing this work, his wife left him and took the stepdaughter with her.

There is no clear reason why a company like OpenAI categorizes its researchers as doing the real work, paid million-dollar compensation packages, and why it categorizes someone like Mophat as doing work that is only worth $2 an hour. There's actually no... It's a purely ideological, subjective reason why you would categorize it that way.

So essentially what these AI companies do is they have this deep-rooted hierarchy about their superiority and others' inferiority. When I first wrote this piece for the front page of The Wall Street Journal, OpenAI did engage with the piece at the time and gave me back a comment saying, "This is what is necessary to achieve our mission of being beneficial for humanity."

I was like, okay, so now the question is, who do you define as having humanity? So that was one of the reasons why I ended up highlighting these stories in the book, of people that were feeling the brunt of the current AI development paradigm far away from Silicon Valley, because that is when you really start to see the logic of the empire.

Justin Hendrix:

Those are stories of the human impact. Another impact you get into is the environmental one. This takes you to a lot of places, but you spend time in Chile. You talk to activists there, including Tanya Rodriguez and others working on issues around water and water resources, but also rare earths and other types of minerals and metals that are necessary for AI development, AI infrastructure. What did you learn in Chile?

Karen Hao:

There are these massive pieces of computing infrastructure going up all around the world to support the scaling paradigm of AI development, the vision of what AI can be that Silicon Valley has captured the public's imagination with. That computing infrastructure, it hits different depending on what community it's in.

Like in the US, most data centers go into rural communities. Those rural communities have far less economic or political power than these companies and have a really hard time pushing back, or even knowing in the first place that these data centers are coming. They have no ability to assert themselves and push back and ask questions or get any transparency on these pieces of infrastructure. It gets even worse when you're talking about a rural community in a global south country that itself has far less political and economic power than the US.

So when I went to Chile, what I learned was the extent to which these companies can really become exploitative and extractive to a new level. So I was speaking with activists who had been fighting for years against this Google Data Center project. The reason they initially started fighting was because they had long been water activists and environmental activists.

They discovered that this Google data center, in the middle of a historic drought in Chile, was going to come into their community to take one of the only sources of public water in the entire country. Chile has a really interesting history: it was under dictatorship for a long time, and during that dictatorship, almost all things public were privatized, including water. There was this one community that did happen to have a public water source that served the residents of that community, and in emergency situations also served water to the rest of the country.

That's where Google wanted to put its data center, and then tap into that freshwater resource to cool it. Google was proposing to consume more than a thousand times the amount of water that residents in that community would typically consume. The activists fought tooth and nail just to get Google's attention, because they not only had to make enough noise to pressure Google Chile, they had to make enough noise to get that all the way back to headquarters in Mountain View.

One of the things that I describe is how Google's headquarters then sends people to Chile to meet with them to try and quiet the resistance. They send people who only speak English, not Spanish. So there are so many power dynamics at play here, where just to try and protect a life-sustaining resource in their city, they have to go through an extraordinary number of hurdles to even get Google to come to the table so that they can have a conversation. Then it turns out they can't, because there's this language barrier. So yeah, that was one thing that I learned.

The other thing that I learned that was really amazing was the fact that these activists, you would think that they would have every reason to feel a complete lack of agency, and yet they have so much agency. They are continuing to fight even years later. They're continuing to mobilize the community.

They go out and hand out pamphlets and flyers. They have community meetings. They're really working hard, and all of this as volunteers. It was a really beautiful moment where I was like, oh yeah, this is how democracy actually works. This is how hard people need to fight, all around the world, to continue exercising their democratic rights and making sure they're not taken away from them.

Justin Hendrix:

I know you and I both share a fascination with these data center fights, how they play out in local communities, and the extent to which people are expressing various concerns about the future of their community, the future of artificial intelligence, the future of the economy, the impacts on the environment.

I want to ask you a little bit: in this country, in the United States right now, we're seeing a kind of about-face on the idea that we should do really anything other than help these companies advance. A lot of that seems based on various promises that the executives make. You point out one in particular, this idea that we have to develop artificial general intelligence, advanced AI, in order to resolve climate change before AI data centers and all the energy necessary actually exacerbate it. So that seems to be one of the key narratives.

Another one is around supremacy with regard to China. We've got to beat the Chinese. Eric Schmidt just the other day at a House hearing raised the frightening specter that everyone in a few years would have an Einstein in their pocket, but what if that Einstein spoke Chinese? Which I found to be quite a thing to say. What do you make of the about-face, the turn in American politics? Even this possible moratorium that's being considered in Congress at the moment on state AI regulation. They really appear to be ready to simply, I don't know, clear the decks.

Karen Hao:

I really think it goes back to the US government now having its own imperial ambitions. They see these companies as the ones that they can send out into the world, the missionaries that they can send out into the world, to help them establish these relations that continue to build the US empire. So that's why they are saying, don't get in these companies' way. Don't, because this is our state asset to continue our expansion.

Yeah. And the fact that, I think we've talked about this in previous episodes too, there's this what-about-China card that Silicon Valley has just been able to pull out so effectively again and again and again. If you look at the track record of what this argument has gotten us, it has just gotten us more authoritarianism in the world. These companies are techno-authoritarians. Instead of actually having sensible legislation and regulation that allowed them to be providers of more democratic technology platforms and services, it's actually just enabled them to be completely authoritarian in their own right.

The world now has no example of any technology that has been developed in a more democratic mindset, embodying democratic values. To go back to what you were saying with OpenAI for Countries and them saying in their blog post, we want to build democratic AI all around the world: what democratic AI? I have not seen any evidence, down to the way that these companies enter into communities and completely hijack the democratic process, all the way up to the way that Silicon Valley is now hijacking the federal democratic process. Where is the democratic part of this enterprise?

Justin Hendrix:

There's a lot of cognitive dissonance between these terms and what's actually happening in the world, between democracy and abundance. That's another term that you bring up. I think even in your dedication, you write that the book is to the movements around the world who refuse dispossession in the name of abundance.

I wanted to ask you about that word. There's another exciting book out that lots of people are reading that deals with this word abundance. There are others who use it, maybe coming from even different political vantage points. This is the goal. This is Sam Altman's word, very much. What does this word even mean?

Karen Hao:

It's a great question. What does it mean? There is a vagueness about it, a purposeful vagueness, I think, in the way that Altman uses it. He used it in reference to the way that the tech industry evokes this hand-wavy idea that there will be more for everyone.

Altman had this blog post called "The Intelligence Age," either earlier this year or late last year, where he said, we are now entering the intelligence age, where the things that we will see enabled by AGI will be so extraordinary that we cannot even imagine them now. It will bring us so much abundance and prosperity that we can't even describe it in the language that we have today.

That was basically what I was referencing in the dedication: do not let this rhetoric about how we will bring you untold riches facilitate the exact opposite. To take away your economic opportunity, to take away your resources, your natural resources within your country. To take away your water, to take away your future and your ability to self-determine in that future. So yeah, that was ultimately what I meant.

Justin Hendrix:

There are so many different things happening in the real world, in the moment we're in, that I want to ask you about. I guess one last one before we wrap up here. One of the things I'm really struck by right now, which you do get to in the book a bit, is how much we would need scientifically to know whether artificial intelligence is quote-unquote democratic, or just, or simply not wretchedly biased: the safety testing, the type of transparency, the type of science that's necessary.

We seem to be closing that down in the US, which seems to be another thing the government is doing that is in many ways useful to the industry but may ultimately, I don't know, shoot us in the foot. So much of the strength of American innovation in the past has come out of places like the NSF and the work of entities like NIST. Are you following these developments? What do you make of them in the context of what you've written here?

Karen Hao:

Yeah, I think they're all evidence that a lot of the democratic processes and institutions that we previously thought were actually quite strong and would last a long time are being dismantled and are crumbling. They get in the way of what the US government now wants to do. That is very alarming in its own right. It also points to something else. People used to ask me, what's the solution?

I'd always be like, "Oh, regulation at the federal level," and all of this activity has made clear that is not the answer, that we cannot rely on any kind of top-down leadership. There is a crisis of leadership at the top, but that doesn't mean that we do not have leadership at the bottom. One of the beautiful things about democracy is the fact that you can lead from the bottom.

I think as we think about what to do next, how to contain the empires, I like to say we really need to start being creative about building coalitions, building movements from the bottom up, to apply pressure and demand changes in the way that these companies operate and ultimately the way that our elected officials operate. We can't waste time; there isn't a lot of it. There's so much work that we can do even when there is no leadership at the top.

Just in the same way that Trump himself bided his time for four years, then came out the gate with just an extraordinary acceleration of activity to dismantle everything. This is the moment when we need to do the work that not only starts to create pressure already on these empires of AI, but also sets us up such that when there is leadership at the top again, there is a foundation for them to act.

Justin Hendrix:

You end this book with organizers, with people working from the ground, with Māori people, with data workers in Africa, and you call for a redistribution of power. Transparency is part of that, but I assume that's what you're getting at here: this need to take power from those in the leadership of firms like an OpenAI, or perhaps of governments like the United States, and put more power in the hands of those individuals. Are there any other more concrete ways that you can think we get to that world?

Karen Hao:

I started thinking about the AI supply chain: data, computational resources, labor, models, applications. Each of these different stages, each of these inputs, these ingredients that go into AI, those are sites of democratic contestation. Silicon Valley has done a really good job of creating norms in which the average person now feels like Silicon Valley owns these resources. They own the data, they own the land for the data centers. But actually, everyone else does.

We own our own data. We own the land and the energy and the water that they want to use. We own our own labor. To me, the concrete things that we need to do to redistribute power include, for example, resisting the amount of data that we just willingly give up to them without any kind of compensation or benefit back to us. We are already seeing lots of examples of that.

Artists and writers are suing these companies. The Hollywood writers organized and mobilized one of the longest strikes in history to assert their workers' rights. Artists are also glazing their portfolios before they put them online, so that when AI models train on their work, the models start to degrade.

Just as in Chile, with those activists resisting the data center development by Google, we are seeing that all around the world, with all different kinds of people rising up to protect their communities and demanding that these companies actually give them something beneficial in return for hosting a data center there. I don't have all of the specific recommendations for what everyone needs to be doing at each part, in each of these sites of contestation.

But when you think about what you can do, this is the framework to think about: what are all of the different levers that you can grab, all of the ingredients that these companies need, all of the industries in which they're deploying? How can you actually push back and say, "No, we don't want your vision. Here's the vision that we want as a public, as our community"? Ultimately, if there's enough of that collective pushback and collective articulation of what we want for that future, I think that is the best hope that we have to move to a better world.

Justin Hendrix:

This book is Empire of AI: Dreams and Nightmares. Karen, is there a dream that you can share with us? Is there a future that you imagine? Are there contours to that future you can describe? If we are able to make some of the changes and reforms and perhaps come to some of the same conclusions that you've come to in this book, what does the future look like for you?

Karen Hao:

My dream is that everyone is able to live a dignified life. That means they have the opportunity to access education, the opportunity to access affordable healthcare, the opportunity to do fulfilling work and be compensated for it. Ultimately, there are many ways that AI can fit into that picture and help bring us to that future, but certainly not the version of AI that Silicon Valley sells to us.

Not the version that requires us to capitulate all of these rights towards some kind of ambiguous future. So my dream is that we remember what we truly need as people to live in a sustainable, equitable, functioning, healthy society, and then we figure out how to build technology to assist that.

Justin Hendrix:

If you want to learn more about how to think about this problem, and how to advance some of the ideas that Karen's just addressed, perhaps you should pick up Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, which is on sale now. Karen, thank you so much.

Karen Hao:

Thank you so much, Justin.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...

Related

Podcast: Resisting AI and the Consolidation of Power (May 5, 2024)
Perspective: OpenAI Is Wrapping Itself in the American Flag to Sell "Democratic AI" (May 12, 2025)
Transcript: Sam Altman Testifies At US Senate Hearing On AI Competitiveness (May 9, 2025)
Podcast: Adam Becker Takes Aim at Silicon Valley Nonsense (April 27, 2025)
