Taking on the AI Con

Justin Hendrix / Jun 1, 2025

Audio of this conversation is available via your favorite podcast service.

Emily M. Bender and Alex Hanna are the authors of a new book that The Guardian calls “refreshingly sarcastic” and Business Insider calls a “funny and irreverent deconstruction of AI.” They are also occasional contributors to Tech Policy Press. I spoke to them about the new book, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, just out from HarperCollins.

What follows is a lightly edited transcript of the discussion.

Emily M. Bender:

I'm Emily M. Bender. I'm a professor of linguistics at the University of Washington and also adjunct faculty in our school of computer science and engineering and our information school.

Alex Hanna:

I'm Alex Hanna, director of research for the Distributed AI Research Institute.

Justin Hendrix:

And you both have to introduce your podcast. You have to say a word about that. That's essentially how I now think of your collaboration.

Emily M. Bender:

It is a big part of our collaboration. We're the co-hosts of Mystery AI Hype Theater 3000.

Alex Hanna:

"A podcast where we seek catharsis in this age of AI hype." That's how we start it every time.

Justin Hendrix:

Absolutely. And if you're a longtime Tech Policy Press podcast listener, you will remember that we talked about the podcast in a prior episode. I can't even remember when that was. It feels like a long time ago.

Emily M. Bender:

We did our first streams in August of 2022, but we turned it into a podcast in spring of '23 and I think that's when we were on the show talking about it.

Alex Hanna:

Yeah. We started as a stream before the podcast came along.

Justin Hendrix:

And that collaboration has now produced a book, which is what we're going to talk about today. The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. Is it correct to say that this is a product of this podcast come to life in old trees?

Emily M. Bender:

Yes. Or audio in your ears, or digitally through your favorite e-reader. But I would say it's an outgrowth of the podcast. The podcast provided a lot of the source material, but it is more than just a product of the podcast. One of the things that was interesting in going from podcast to book is that the frame of our podcast is that we react to hype artifacts, so we're always pushing off of something, and in the book we had to state the case as a freestanding, structured argument.

Justin Hendrix:

You say this is a joyful collaboration between a linguist and a sociologist. You've been at this for a bit. You say the goal is to help the public at large and decision makers at all levels become resistant to AI hype. How's that project going? I'm talking to you on Thursday, May 22nd, 2025, the morning that a 10-year moratorium on state AI regulation has just passed in the House of Representatives. We'll see what happens in the Senate, but it looks like things are moving, perhaps, in a direction that you would regard as the wrong one.

Alex Hanna:

Yeah. Educating policymakers is one project, and that project has been really awful. But I will say the thing that is helpful to see is that educating the public has been a very fruitful project, and so we get a lot of comments, both on the podcast and on the book, along the lines of: wow, I've been waiting for someone to put together a text or a group of arguments that summarizes a lot of what I've been feeling or hearing about things like the impacts to labor, the impacts to the information ecosystem, and the impacts to the environment, and now there's a canonical text on many of these things. So that's been helpful, even while the legislative dimension has been pretty depressing.

Emily M. Bender:

I guess I would add to that, the thing I've been saying to audiences as we've been on book tour is that at a federal level, things are bleak right now, but if we manage to fight our way back to a functioning government, it's important to know that policy doesn't come out of thin air, and the more educated the public, the better position we're in to actually end up with sensible policy. So in one sense they're separate projects, and in another sense it's actually the same project to educate the public at large and policymakers.

Justin Hendrix:

This book is written in a very accessible form for two academics. I think you've done an admirable job of somehow blending seriousness with your general podcast demeanor, which is perhaps a little more mirthful. Was that a process for you all to, I don't know, get this into a format that is so accessible?

Emily M. Bender:

So I was going to say you're too kind to describe our podcast demeanor as mirthful. It's snarky. And I think that being from different fields has been our superpower in coming up with something accessible, because we could just check each other. I'm a linguist, Alex is a sociologist, and we don't have the same expertise, so we had to make sense to each other. I would say the hardest part for me was that academic urge to cite our sources. It's all there, there are endnotes, but our editor didn't let us put the little numbers in along the way to say there is an endnote to go look at here. So I always like to tell readers, if you're wondering what our source is, go to the back of the book. It's there.

Alex Hanna:

Yeah. In terms of writing it, I appreciate the effort to take a text, to take things which can get very in the weeds very quickly, and try to make it accessible. And I think one of the things that we hope the book does well is that it proffers a set of arguments which I think are academically well thought out and puts them in terms or phrases which I think are useful, ways that people who read the book can really use in their everyday life where they have the power to resist certain intrusions of "AI" into their lives, or, if they don't have the power, find ways to organize against it.

Justin Hendrix:

I want to spend a bit of time talking about that part, about the organizing against it. You talk about labor, and you are concerned with automation in a variety of different workplaces, journalism being one, but others as well. Let's just talk a minute about labor power and what you're observing there as you try to ... I don't know. Inform people about the hype and then generally perhaps help push some form of resistance. What do you see on that side of things? Is the type of resistance, or the type of awareness, in labor conversations about artificial intelligence substantial enough at the moment?

Alex Hanna:

So we start in the book and we say something like, AI is not going to take your job, but it will make your job shittier. And I think that's bearing out in a lot of different places where it's being implemented: what we're seeing is that people either have to babysit these chatbots in different scenarios, or they have to work around them, or it's offloading the friction of work onto a different job class, or people are being laid off and then hired back in gig worker positions. Sometimes the best thing you can do is read GitHub discussions about AI, because it gives you insight into how a lot of developers think about these things. These are the places where Sundar Pichai and Satya Nadella seem to indicate that they're having some of the highest efficiency gains or productivity gains.

And when you get into that, there is this really incredible GitHub issue where Copilot had made this pull request without human intervention. And I'm not going to state this quite correctly, but I think all it had done, basically, was this: there was an off-by-one error, and it had added a check for the off-by-one error rather than addressing the root cause of why there might be an off-by-one error. And the maintainer of this repository ... this is a Microsoft repository, by the way, for the .NET framework ... was effectively trying to coax Copilot into writing unit tests for all these things and log the unit tests in a supplementary file. And then the comments on it were all like, "Wow. This thing is so helpful." And it was, wow, you've got to do all this extra work to do this.

That's an anecdote, but it's supported by some large-scale empirical data. There was a Danish study, I think it was a survey of 25,000 workers, and they effectively found that there were no real increases in productivity, or that where there had been increases, there had also been new tasks introduced by the tools. So that has to do with the labor argument that features pretty extensively in the book. And in terms of the resistance to it, we've seen it in a lot of different corners. There's a lot of attention to it in California. For instance, the Berkeley Labor Center and a set of unions in California convened statewide convenings specifically on technology and on AI, and talked a lot about the kind of resistance to these technologies, ways in which certain unions had been able to get provisions in contracts either governing the usage of generative AI in their workplace or ensuring that it did not intrude into their workplace at all.

The most well-known of these being the Writers Guild of America and their strike in 2023, which lasted for 148 days. We've seen other unions have provisions to prevent the intrusion of generative AI in their workplaces, including the unions representing workers at Ziff Davis, which owns outfits like Mashable. Law360 has been another organization with strong provisions. And SEIU Local 688, which serves public service workers in Pennsylvania, has stood up a technology committee and has some contract language around generative AI. So this is becoming an increasingly important thing to bargain around. It is of course not the only thing; there are a lot of other things, as many other industries are facing austerity, but it is entering into those discussions in many different quarters and many different industries.

Emily M. Bender:

And this is why regulations and laws protecting labor and workers' right to organize are actually a really important kind of AI regulation.

Justin Hendrix:

I want to talk about some of the specifics of this. You talk about a history of, especially, data workers coming together. You go back to Mechanical Turk and Turkopticon, and of course there's now the Data Workers' Inquiry. These various sorts of collectives, in places including Africa, are even engaged in litigation. I'm wondering, on that side of things, if you feel it's at all substantial enough at the moment, or what is the differential between where we're at and where we need to be to substantially pop the hype bubble or make a meaningful impact?

Alex Hanna:

It's a good question. It's a hard question because, first off, unionization rates are very low in the US. Among the people who have joined the different labor collectivities, it's not like there is a huge number of people. We have now the African Content Moderators Union, we have the Data Laborers Association, we have this new UNI-sponsored project which is attempting to be cross-national in nature, and so there's a lot of organizing that's happening. But we also recognize that the scale of things like data labor is very extensive, and it's very hard to do cross-national organizing, especially when the labor conditions and the management-labor regimes are so different from country to country. But the interventions I think are helpful insofar as there is an ability to stand up against particular companies who are perpetuating terrible working conditions and to inspire new organizations to form.

Another good example, you mentioned litigation: there has been some strategic litigation, I think in the UK ... and I forget the case exactly ... but there was a case in which an individual used, I think, GDPR as a means to push against certain uses of their data in the workplace and to push back against workplace surveillance in particular. And there have been some efforts and attempts in California to use the CCPA to ensure that data rights are respected, specifically data rights on the job. The UC Berkeley Labor Center put out a guide on doing that, and I know there's been some success in doing that with gig workers in particular.

So there are places in which existing privacy legislation has been able to be a means of resistance to the encroachment of generative AI. There's been the development of labor collectivities. The scale of the onslaught is quite large. I don't know if it necessarily pops the AI bubble, but these are important places of resistance, and probably the places where we've seen some of the most effective control in locales where regulation has been wholly absent, and that includes the US.

Emily M. Bender:

I just want to add there, you asked the question in terms of what it would take for something to be meaningful, and I think in many ways the early starts of these things, when the numbers are small but people are really opening up the conversation, are hugely meaningful. For example, the Data Workers' Inquiry is a really important example of bringing attention to what's going on in the work that is hidden behind the facade of artificial intelligence, because it's being marketed so heavily as fully automated when it's always people. And so the various initiatives to make that visible are hugely meaningful, even if they are small compared to the onslaught.

Justin Hendrix:

You talk about various other issues around the same part of the book: questions around automation and austerity, the idea of abdicating governance to automation, which seems to be happening in many different ways across both the private sector and the public sector. Of course, it's one of the key concerns, I think, with DOGE and the automation of different aspects of the US federal government at the moment, that somehow the governance of how services are delivered or how government performs its functions will essentially be reduced to code, and there won't be that messy democratic process that is often ... I don't know. I think what we think of as part of having a humane government on some level. What do you reckon the near future looks like on this stuff? Strikes me we may go all in on austerity, all in on automation of governance, and maybe end up with a lot of buyer's remorse.

Emily M. Bender:

Buyer's remorse is an interesting metaphor here, because I don't think that very many people felt like they were buying this in terms of what's going on in their federal government. And one of the things that surprised me a little bit about the DOGE turn is that I was expecting the most likely thing to be: we're going to sell you this automation as more objective, fairer, and so on, coming from people who at least seemed like they cared about providing services. And that's not what's going on with DOGE. I'm not entirely sure why they're even bothering to automate, except maybe that Elon Musk is fascinated with artificial intelligence, because the goal just seems to be to destroy the whole system. So it's bleak. I do think that it is important for the public at large to learn how to articulate the counterarguments and to call bullshit on these things. But it's an interesting extra layer that it's being done not in the context of someone ostensibly providing services, but in the context of someone deliberately trying not to provide services.

Alex Hanna:

I also find it funny you use the word austerity too. It reminds me of the ... What was it? The Reinhart-Rogoff error in economics, where there was the 2010 paper by these two Harvard economists that was effectively being used as economic research to justify austerity cuts. It reminds me of that insofar as it's a technological excuse to then perpetuate austerity. Although now, in this era, there is very little actual scientific research showing that these austerity measures are going to lead to anything that is a savings over time. And so we don't even have the veneer of potential productivity gains in social services. Instead, the hype is really driving the way in which we think that these tools are going to help, or there's just a veneer of productivity around it.

This is especially happening in California with Governor Newsom. So DOGE is very much a Trumpian/Musk unholy alliance; in California you have something which is very much an own goal by Newsom and his administration, in which he's looking for the incursion of these tools into every area of life, whether it's helping people with their taxes or somehow optimizing traffic or somehow optimizing homelessness bed allocation. And this is something that he has said. Now, one of the things that I think hasn't been reported on a lot is that there's this tool purchased by two LA-based nonprofits, one of which is owned by Rick Caruso, a billionaire who ran for mayor of LA, and the other of which is run by the owner of the Dodgers and Magic Johnson. They purchased this tool called Archistar.ai, which is supposed to help with expediting permitting, and they're looking for that tool to facilitate the rebuilding of real estate after the Eaton and Palisades fires. Which is a wild thing to say: we're going to use this tool to expedite permitting, so you're sidestepping urban planners, you're sidestepping all the careful work that needs to be done in the LA city and LA county bureaucracy. And I think state leaders and city and county leaders in California have really gone whole hog on that. Yeah. DOGE notwithstanding, we're finding this happening in the states, and in ostensibly one of the most progressive states in the union.

Justin Hendrix:

Another area I wanted to ask about, that you're concerned about in the book, is science itself. You say it's squarely in the hype danger zone. We're seeing the automation of scientific instruments, which you say could be reasonable in many cases, but then lots of other types of problems: peer review and evidence essentially being manufactured. I think there was a significant paper retracted from MIT this week, one that was much quoted, around an economic analysis of the impact of generative AI, that appeared to be entirely made up. But somewhere between entirely made up and little bits fabricated here and there, whether it's some kind of western blot image in some biology paper through to a citation, it seems like this stuff is creeping in. What do you reckon the, I don't know, near-term prospects around this are?

Emily M. Bender:

There are a lot of problems there, and it's one of these situations where people are under pressure to publish papers quickly. We're also under pressure to review papers quickly, for the fields that are still ostensibly practicing peer review. I have a lot of scorn for the machine learning culture of just throwing things up on the arXiv preprint server with no intent of ever actually going through peer review. But there are other fields that still ostensibly practice peer review, and because everyone is under pressure to publish a lot of things, and now they have synthetic text-extruding machines that help them make paper-shaped objects more quickly, you have more stuff being thrown into the peer review system, and that is bad. It is already creaking.

But one of the things that we point out in the book is that what we need to do to shore up these systems is the same thing we always needed to do. We need to make sure that we are resisting and reversing the casualization of the academic labor force, so that people actually have time to do careful research and then also have time to do careful peer review, for example. And we need to insist on careful peer review and shun venues that don't practice it. At every step in the process, because so much of science goes through the exchange of linguistic artifacts, it looks like you can run the synthetic text-extruding machines over them and speed something up.

One of the ones that makes me the angriest is when people suggest that you could use a large language model to do a literature review for you, that is, to extrude that section of a paper that's got a bunch of citations. But the whole point of science is that it is a collective endeavor where you are building on the work of other people, which means reading it and understanding it and critically engaging with it, not just citing it. And I think that some of this has its origins in an actually pretty bad citation culture within computer science. It's pretty frequent in the computer science papers that I read that the background section or the related work section is basically just this defensive list of citations saying how this paper is different to those other papers, rather than actually talking about how what's gone before opens up this next question and how the question that we're answering in this paper relates to previous work.

Justin Hendrix:

I want to get on to another thing you're concerned with here: the climate catastrophe. The situation we're in, where I think everyone recognizes what's going on now. AI proponents are telling us to build the AI as fast as possible, that this is the only thing that's going to save us, and that of course involves clearing the decks to burn as much coal as possible, to fire up as many data centers as possible, to generally build the AI infrastructure. What is this? Is this a last-ditch effort?

Emily M. Bender:

It's magical thinking. It's utter magical thinking. We know what we need to do about the climate crisis. There isn't some mysterious answer lurking out there that we could get to if we could just do enough calculations. It's a political problem and a social problem, and you cannot number-crunch your way to a solution, let alone spew synthetic text your way to one. The idea is that if you just scale these things big enough ... I'm in the middle of reading Karen Hao's Empire of AI, and I'm at the part where she's talking about the scaling laws. People somehow have this idea that if you could just throw in more parameters and more training time and more text, then somehow the racist pile of linear algebra combusts into consciousness and it will be smarter, which isn't even a sensible scale, than people, and therefore able to solve these problems. There's really no there there, and it's immensely frustrating.

Alex Hanna:

And it's highly tied to the AI for science view, the idea that the mechanism for solving climate change is that if you are somehow able to develop an automated scientist, the automated scientist will figure this out. And that's not how science works. That's not how any of this works. That's not how climate science works. There are people who purport that. There is this element of ... There are optimizations that can be done, and there are creative optimization problems, not LLM technologies in particular, that can be worked on for particular types of climate work and energy work, but there's not going to be a solve in the AI for science frame. And there's a great paper by Lisa Messeri and M.J. Crockett where they talk about this AI for science frame, what it does, and the ways in which it doesn't really understand how science works.

The climate catastrophe is being exacerbated. I told you this, Justin, but your great reporting on what's happening in Memphis is really helpful, and I direct your listeners to listen to that. I think your reporting and speaking with the journalists and organizers focusing on that has been really helpful. And that's one place. But we also know that data centers are expanding in a lot of other municipalities, around Atlanta and in Louisiana, a lot of places where there is lax state regulation or a promise that there's going to be lax oversight over many of these data center projects, places where there isn't a lot of community consultation and really not a lot of ability to push back. And so we're seeing this. In addition to the climate crisis, there's also the air pollution issue, which is the one that has been very much at the center of what's happening in Memphis. This is also happening where new data centers are being built around Atlanta, in Data Center Alley in Loudoun County, and also on the outskirts of Columbus, Ohio. So we're seeing that as another huge public health issue.

Justin Hendrix:

I want to spend a minute of time on the final chapter of this book: do you believe in hope after hype? The climate stuff we just discussed, that's probably, for me personally, the part about all of this that gets me down the most, that we appear to be making this suicide pact with some mythical artificial intelligence. Either it comes along and solves the world, or we burn all the resources and make things even worse for humanity in a period where we're already seeing a lot of struggle and strife related to climate. But what are these strategies for popping the AI hype bubble? What do we need to do in order to get to a different place? You talk about collective action and comedy; clearly that's what you're trying to encourage with the book, with the podcast, but what other strategies do you recommend that folks who are concerned about these issues follow?

Emily M. Bender:

So there's a series of questions that I encourage people to ask whenever we are faced with a piece of technology. And it might be a situation where we personally are deciding whether or not to use something, we might be in a decision-making position where we are actually deciding on the use of technology for a broader group of people, or we might be in a position of an activist trying to push back on some technology that's being applied. And the first step is to not call it artificial intelligence because that just muddies the water, but instead think about it in terms of automation. And then we can ask, okay, what's being automated? What's the input? What's the output? Can you sensibly get to the purported output from the inputs? So if the input is an audio recording and the output is a transcription, this technology might not work perfectly and in fact, we don't have perfect automatic transcription, but it is a sensible idea that if you've got a recording of someone speaking, then there's enough information in that input to get to the transcription in the output.

Contrast that with someone who claims, and this has been claimed, that they can identify whether or not someone is a criminal based on a picture of their face. That is, just on the face of it, obviously not a possible thing, because the category of criminal is a social category and not something that's inherent in a person, let alone visible on their face. So you can ask this input-output thing. But then, if it's passed that test, you can say, okay, how was this built? Whose labor went into the training data and other aspects of it? And is that training data actually representative of the kinds of situations where I expect it to be used? And very importantly, how was it evaluated, and does that evaluation match my use case? And then beyond that, you can ask questions about, okay, why are we automating this? Who's benefiting from it? Who's possibly being harmed? What recourse do people have if they're harmed? These are questions that anybody can ask. You don't have to know how the system works. And if the answers are unavailable, then that is a strong reason not to say yes to these systems.

Alex Hanna:

You already mentioned, we talked a lot about organizing and a lot of the efforts in that space. And I think there are also elements of consumer pushback against this. One thing that we don't really talk about in the book, but that I am having a lot more conversations about, and this might be of interest to your listeners, is thinking about the areas of one's own profession where it's helpful to push back in terms of practice. And so one thing, as an educator, that I've been having a lot of conversations about is how we really ensure that there's a world in the classroom that doesn't feature these tools so predominantly. Last week there was this New York Magazine Intelligencer piece that was all about the worry and the intrusion of AI in the classroom and the way that cheating has massively ... I don't want to necessarily call it cheating. I want to call it the use of LLMs in doing homework, because the frame of cheating does this other thing where it puts the onus on the student, who is often under immense amounts of pressure. They have peer responsibilities, they have work responsibilities, they have 18 other classes, and to meet these expectations they're like, "I have an essay. I'm going to have ChatGPT do it," and whatnot.

And so there's been a lot of interest from academics and instructors in thinking about how we can really reinvent the classroom, how we can think of ways to really reinforce those critical thinking skills and those aspects of learning that we want to reinscribe but that cannot necessarily be reduced to the text. And Emily has a great line where she says, "We don't write student essays because we need to keep the number of student essays in the world topped up. Student essays are written as a means of creating thought processes and teaching a particular set of concepts." And so what are the kinds of ways of resistance that require changing practice? Unfortunately, that often means the types of practices which are not the most efficient ones, but they're certainly the ones that take the most thought. And so I think, however one does that, rejiggering one's own process to avoid the efficiency that is somehow granted to you by LLMs can also be a way of pushing back, of thinking about that in one's own work.

Emily M. Bender:

And if efficiency is the goal, I think we should be questioning why that's the goal. Coming back to this thing, the way I say it is we don't assign student essays because we need to keep the world's supply of student essays topped up. Writing and thinking is an inherently inefficient process, but efficiency is not the point. If efficiency were the point, then the goal would be words on a page, but the point is the experience and the thinking and the refining of your thoughts, and that takes time and effort, and that time and effort is worth it.

Justin Hendrix:

The podcast has been a resounding success. The book is out. What's next for the two of you?

Emily M. Bender:

The podcast isn't finished. We're still going with that with no plans to end. And I think also I'm curious to see where this conversation goes. And the podcast, you say it's a resounding success. That's nice to hear. It's not got a huge listenership yet, but that can always change over time. But I think the book is going to bring this conversation into interesting new spaces and I'm excited to see where that goes.

Alex Hanna:

For the two of us, I think there's a lot more hype to continue to address and ridicule. That's important to do. And I think a lot of what we're doing now is trying to find different ways of getting this set of ideas to different people. I think it's helpful to think about creative outputs; we're talking about doing some zines, doing some things of that nature. In particular, thinking about what's next in terms of what we're doing individually, I'm very interested in labor and this question of worker resistance. That's what I'm thinking about a lot, and I think that is a thread that runs through the book, and that work goes hand-in-hand with the effort of what the book is doing.

Justin Hendrix:

Sign me up to help with the zine. I haven't done one of those since high school, but sounds like a lot of fun. I do recommend to my readers to go and pick up a copy of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want by Emily M. Bender and Alex Hanna. Emily and Alex, thank you so much for coming on this podcast again.

Emily M. Bender:

Thank you Justin.

Alex Hanna:

Thanks for having us on again.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
