Your Guides Through the Hellscape of AI Hype
Justin Hendrix / Jul 2, 2023

Audio of this conversation is available via your favorite podcast service.
Alex Hanna, the director of research at the Distributed AI Research Institute, and Emily M. Bender, a professor of linguistics at the University of Washington, are the hosts of Mystery AI Hype Theater 3000, a show that seeks to "break down the AI hype, separate fact from fiction, and science from bloviation."
I spoke to Alex and Emily about the show's origins, and what they hope will come of the effort to scrutinize statements about the potential of AI that are often fantastical.
Below is a lightly edited transcript of the discussion.
Justin Hendrix:
I'm very grateful that the two of you found time to talk to me today, because you really truly have been at the center of a tempest over the last six months. It seems like since the release of ChatGPT, both of you have been in the media, at conferences, just perpetually in demand as commentators on our new robot overlords and the hype around them. How does it feel being at the center of all of this attention?
Emily M. Bender:
It's a little exhausting. It's also an interesting shift, because it's not like there wasn't AI hype before. We've been doing our work on AI hype. The podcast that's coming up, we started that in August of last year, but we've been working together on papers since before that. A lot of it was within the research community, pushing back on AI hype. The big change with the release of ChatGPT was that all of a sudden, it seemed like the broad public was involved in the conversation. A lot of it was actually saying the same things we've been saying, but saying them to new audiences, which is a great opportunity, but also, the world could do with less AI hype.
Alex Hanna:
Totally. I think that even though we've been in the mix of all these conversations, it's a time sink, I think, in some ways, because there's just so much hype to counteract, and there are so many people interested in getting in on the gold rush of generative AI right now, and they're just promising so much. There's a necessity to counteract that, but there's also a research program we both independently had prior to this that had to do with a lot of similar things: Emily's work around natural language understanding and other interpretations and misinterpretations of natural language, and my own research program around data and the institutions that are producing data and the labor around data. It's become a component of what we do. Now, we have to find different venues and different strategies to push back against all the hype.
Emily M. Bender:
One journalist I talked to pointed out that nobody gets paid to combat AI hype, but lots of people are getting paid to put it out.
Alex Hanna:
100%.
Emily M. Bender:
It's a really uneven playing field.
Justin Hendrix:
Well, we're going to talk a little bit about one particular project the two of you have to pierce the veil, or pierce the bubble, of AI hype, which is this podcast, which has a comedic intent to it. Perhaps it's also a therapeutic effort for the two of you. I want to ask you a little bit more about your research agendas and why you're doing this work. Maybe Emily, we'll start with you.
Emily M. Bender:
I work in computational linguistics. I'm a linguist and I've got one research strand that has to do with multilingual grammar engineering and computational semantics. That's all pretty far from this conversation, except that it informs my understanding of what language is and what the understanding of language is. Then since late 2016, early 2017, I've been working on the societal impacts of language technology. In that space, there's a lot to be said about how things like large language models pick up bias and what the implications of that are. Also, I found these two things merging together in the misconceptions of what language models are doing and the way they're hyped in the world. There are societal impacts, negative societal impacts, of saying that these are everything machines that understand and can think. That's been a focus of my research as well, where these things meet up. You're absolutely right that our podcast is therapeutic and irreverent, and we'll talk more about that.
Justin Hendrix:
Alex, how about you?
Alex Hanna:
I would say the strand that has run through my work is the interaction of technology and society. My very early work looked at the impact of social media on politics and on social movements. Then, also around the same time, in 2017, it became more of a science and technology studies understanding of machine learning and AI. My work since then has looked a lot more at the data sets that are used in the construction of AI. That's where Emily and I connected. I think we were joking on Twitter about something, I forget exactly what. It ended up producing two papers. One, AI and the, oh gosh.
Emily M. Bender:
AI and the Everything in the Whole Wide World Benchmark.
Alex Hanna:
Thank you. We just call it the Grover Paper because it has a really great anecdote with Grover from Sesame Street, where he goes into this museum and it's The Everything Museum. He gets through this museum and then there's a door labeled "everything else," and it's the real world. Then we have another paper, Data and its (dis)contents, a survey paper on a bunch of the uses and criticisms of data used in machine learning. The current project I'm pursuing is, I guess the short way of describing it is, a social history of ImageNet, but not told through the progenitors or the curators and researchers; told from the workers who labored on it and also some of the data subjects. I don't want to say too much about it because it's kind of a kernel in my eye and I don't want to get out in front of it. I'm willing to mix my metaphors. It's my privilege as a second-generation immigrant to mess up metaphors.
Emily M. Bender:
What I want to say about those two papers is that it wasn't just me and Alex. On the Grover Paper, AI and the Everything in the Whole Wide World Benchmark, the lead author is Deb Raji. Then on Data and its (dis)contents, the lead author is Amandalynne Paullada, and Emily Denton is also an author on those papers. It was that group of people. I think the other four were working together already. Then, when everyone was just all online in 2020, on the strength of those Twitter interactions, I got looped in to the weekly Zoom meetings, and that's where those papers came from.
Justin Hendrix:
I want to talk a little bit about the genesis of this podcast, because you've taken essentially the idea of Mystery Science Theater 3000, which, if folks don't remember it, I am of course old enough to remember. There was a gentleman and his robot pals who would watch typically very bad older movies and essentially poke fun at them the entire time. It was a sort of irreverent, bizarre show which came on late at night and probably only appealed to people in a certain state of mind. In its own way, it was kind of funny and kind of smart. Talk to me about this. Where did the idea to do this around AI hype come from?
Emily M. Bender:
It started with the group chat that persisted from that group that was writing those two papers. We weren't meeting actively anymore, but we still had a group chat. We would just use it to collectively and quietly roll our eyes at some of the worst stuff that we were seeing. We would just, among the five of us, whenever we saw something terrible, just throw it in that chat. Then I came across this really, really awful blog post, which is the subject of what turned out to be our first three episodes, and I said, "We've got to give this the Mystery Science Theater 3000 treatment. Who's in?" The funny little secret here is that I had actually never watched that show. I just knew the concept. I'm like, "Let's do this." Alex was all over it. Not only had Alex actually seen the show, but she also knows how to stream things over Twitch. What was it like on your side, Alex?
Alex Hanna:
Let me not give myself that much credit. First off, I'm a dedicated adherent of the show, having been raised on late-night Comedy Central and Adult Swim-type programs, and I had watched the movie version of it, I think, multiple times. It was really a learning-as-we-go thing in terms of the streaming; we learned basically how to use Twitch. I mean, the first episode is pretty rough, because I think we tried to use one streaming software and literally had to download another and relearn mid-stream, and then it went from there. Luckily, nowadays we have a very talented friend of mine, a radio producer, Christie Taylor, who produces the show and does post-production and has been a phenomenal person to work with. It was a combination, and we said, "Hey, let's just throw this thing up on Twitch," just because it had been a cool way of engaging. I think the format itself has been really great because it invited a group of dedicated fellow travelers to come along with us as we pick up new artifacts along the way and just poke holes in them and troll them mercilessly.
Emily M. Bender:
I think part of it for me was that I also do deconstructions of AI hype in text form, sometimes tweet threads and sometimes blog posts. This first blog post that we treated, which is something like "can machines learn how to behave?", there was so much of it that I felt like it would've been just too tiring to go through in text, but it could be really fun to do reactively, especially with a buddy. I think it turned out really fun. When you listen to the first couple of episodes, you'll hear that we weren't planning this to be a long-term thing. Then in later episodes, we're constantly surprised that we're still doing it.
Justin Hendrix:
Let me ask you a little bit about the reaction. Have any of the folks whose work you have deconstructed, or perhaps in some cases pilloried, reached out to you? Have you had any dialogue with any of these individuals? I should say, some of this work is coming from executives at big tech companies. I wouldn't assume they have particularly thick skin.
Alex Hanna:
I know, Emily, you've had a few reactions. I would say a lot of the people that come after us are already people who quite dislike us, for daring to go after the notion of these technologies without acknowledging their potential. There was a little bit of a Twitter kerfuffle saying, okay, these technologies are very impressive in what they're doing, we can start from there and then criticize them. I would love to flip it and say, "Well, that's not the purpose of the podcast." These technologies already have legions of celebrators, and sure, okay, they're doing some interesting things. We don't have to celebrate them to criticize them.
Emily M. Bender:
I don't know that I've actually gotten too much pushback around the podcast stream specifically. I get a lot around the Stochastic Parrots paper. I think for the most part, the folks that we have pilloried, and there has been some of that, have either not known or have decided that they just didn't want to draw attention to us. It'll be interesting to see. We started recording it in August of last year and now we're releasing those first episodes and we'll keep going as a podcast. It's probably going to get more traction than it's had as a Twitch stream. Those responses might be incoming.
Alex Hanna:
There have been some responses. I'm thinking about the Sparks of AGI paper; the lead author had reached out. It was mostly around removing a footnote from the paper, a citation to a piece that had been signed by 52 psychologists or something. It was this piece from 1994 that was effectively supporting Charles Murray's Bell Curve hypothesis. They had linked to this intensely, in fact very racist, letter, which effectively said that, based on IQ, white people are in aggregate smarter than Black people. There's some essentialist reasoning behind that. The response was to remove the quote, but not really remove the impulse of the whole study of intelligence and social intelligence and IQ, which has its eugenicist origins. It's like, okay, you remove the most egregious citation, but you are still legitimizing this whole area of research, which is intensely problematic. Missing the forest for the trees right there.
Emily M. Bender:
Although I don't think that I can claim that as an effect of our show because I was also making noise about it on Twitter.
Alex Hanna:
Sure. Fair enough.
Justin Hendrix:
Well, let's take a look at one of these little pieces that you've put together. Do you have an example for us, something that we can look at in real time?
Emily M. Bender:
Yeah, I do. One of the things that we do, starting in maybe episode four or five, is something called Fresh AI Hell. Each episode has the main course, some main thing we're looking at, and then we save a few minutes at the end to just look at random little tidbits of AI hype that we want to mock. Here's one that we haven't gotten to, so you can say I saved this for Tech Policy Press.
Alex Hanna:
I want to say something about Fresh AI Hell too. I think it came from a comment one of us made: what fresh AI hell is this? Now, something that I love that Emily's been doing, although it puts me on the spot, is we have a nice intro to the segment: Emily gives me a prompt and then I sometimes do a literal song and dance. Oh, gosh. This one. Okay. We'll do this in full style. Emily, do you want to read what we're looking at?
Emily M. Bender:
Sure. We're looking at a tweet with a screen cap of some other social medium, where someone whose byline is "consultant, technology management" has written, well, I can't tell you exactly when this was, so sometime in the last few months: "I can suggest an equation that has the potential to impact the future: E equals mc squared, plus AI." It goes on: "This equation combines Einstein's famous equation, E equals mc squared, which relates energy, E, to mass, m, and the speed of light, c, with the addition of AI, artificial intelligence. By including AI in the equation, it symbolizes the increasing role of artificial intelligence in shaping and transforming our future." And I think, oh no, there's one more sentence, I've got to read it: "This equation highlights the potential for AI to unlock new forms of energy, enhance scientific discoveries and revolutionize various fields such as healthcare, transportation, and technology."
Alex Hanna:
The screenshot looks like it comes from LinkedIn, which is the most vapid of all social media platforms. And the quote tweet says, "this is what you all sound like when hyping AI." I think I re-posted this, and many of the replies were like, "Well, it's not wrong, because if AI equals zero in this case, the equation still holds." It has then gone ahead and said that AI itself is an empty signifier. It does symbolize the kind of hype, the way that this stuff is just used with no attention to the meaning of words and what these things are actually doing. These things get further and further divorced from any kind of semblance of reality, from what these things are and what the technology is, that it's just really confounding. This is just hell. This is a hell kind of thing. There's nothing to deconstruct. This just sucks.
Emily M. Bender:
One thing that I found particularly striking about this one is that usually we're seeing it as: the AI, or the ChatGPT, can be used in this healthcare context, or it can be used in this education context. It's computer science folks impinging on other areas of expertise, as if there aren't people out there who know what they're doing. They don't usually go after physics.
Justin Hendrix:
I like this, because this is truly galaxy-brain-scale thinking. It actually prompts me to ask you a question about the relationship between AI and faith, or AI and the future. I feel like so much of this hype, and this one kind of does it to some extent, is driven by this thought that this is the answer, folks. This is the answer to all the problems. This is what we've been waiting for. If you're not on the train towards solving all of the species' problems with this new technology, then you're standing in the way of progress. You're standing in the way of Valhalla. That comes through a little bit in this one. Isn't that kind of what's going on here, a little bit? These are now articles of faith: that AI is going to lead us to the Promised Land.
Emily M. Bender:
Absolutely. If you dare to speak against it, then you're branded as a heretic. Or, what was it? Timnit Gebru was saying that she and I have been described as "capability deniers."
Alex Hanna:
That's a new one. What does that mean? What is a capability in this construction?
Emily M. Bender:
My guess is that it's all the things that the large language models, ChatGPT et al., supposedly have: the capabilities, these supposed emergent abilities that are the sparks of AGI and so on. When we look at that and say, no, actually, you're fooling yourself by looking at plausible text that came out, text extruded from the machine, as I've started to call it, and making sense of it, then people who really do have something of a religious fervor about this get upset and say, no, you're denying what is plain to see, that this is really effectively magic.
Alex Hanna:
Capability deniers is a fascinating insult, because capability strikes me as a bit of an ableist insult too. So much of technology doesn't work for disabled people, and yet this thing is supposed to solve that. It has this positive dual-use quality that people attribute to it. I mean, as a sociologist, my take is historically inflected. My tendency is, well, how can we make historical parallels to what was going to be this amazing thing in the past? I try to find those connections. The idea of AI as a faith-based article definitely connects with other kinds of promises of automation, other kinds of promises of robotics, of things that are going to be a sort of intelligence, this intense labor-saving thing.
My sense is, the faith element, I think you're absolutely right, is a component of it. It is, if I'm allowed to borrow a term from Althusser, this ideological, oh, what is it? An ISA, an ideological state apparatus that sits on top of the massive amounts of wealth that are going into this field right now. People are trolling this McKinsey article; I think it said AI, in whatever, five years, is going to be worth $4.4 trillion. If you look at the amount of money that's actually been tossed around in AI deals, it's about $44 billion in the past five years. Pick whatever ideological structure you want and take it as an article of faith that this is going to revolutionize something, but that's really buttressing this huge amount of money, gold coins being stacked one on top of another, precariously.
Justin Hendrix:
What do you hope? Let's say that tomorrow, more of the industry, more of the policymakers who are concerned with AI, woke up and discovered your podcast, maybe discovered some of your thinking, your papers. How would we, as a society, be addressing AI at the moment? What would we be doing differently?
Emily M. Bender:
What I'm hoping, and we're starting to see some glimmers of this, I love it. Actually, I've heard this all through Alex: various people have been sending our show to others who are in decision-making positions. What I hope that we're going to see is much more critical thinking about what these technologies are. On the one hand, as consumers and as decision makers, we can say, no, actually that's not a source of information; no, actually that's not a replacement for therapy, et cetera. Then also, have the energy to look at the ways in which management is using this to displace and weaken labor. There's a whole set of harms happening in that space that I think are less exposed than they might be, because everyone's looking at the shiny new toy. Although, huge props to the Writers Guild, who have made this an essential issue in their strike. I think that is raising awareness.
Alex Hanna:
I would say, from a decision maker perspective, the good thing is that there are lots of moves in this direction. There was a great letter from, let me make sure I can name all of them, the Consumer Financial Protection Bureau, the EEOC, the DOJ, and the FTC. They put out a joint letter effectively saying, look, we've got the tools to regulate elements of AI and AI hype as it is. You need to be very careful in claiming the capabilities of these systems and not oversell them. We're going to pay attention to the ways that these things can exacerbate discrimination, the ways in which they can harm consumers.
That is some very nice signal, at least, coming from regulatory bodies. As Emily said, it would be great if a lot more attention were paid to the labor component of this: the threat that automation is going to harm and displace workers. The hypers want to say that these things are going to make workers more efficient, that they're going to usher in a fully automated luxury communism phase where workers will be able to use that free time to pursue art and read Plato or whatever. That's not how late capitalism works.
Late capitalism works, whether these things work or not, by driving down wages, by devaluing work, and by firing people and then rehiring them to be automation babysitters. I'm hoping more attention is paid to this by bodies that can improve labor conditions: by the NLRB, by legislators and legislatures who pay attention to labor. I do think the writers have really highlighted that. Other folks who have been unionizing have highlighted it too, including the former staff of the National Eating Disorders Association helpline, who unionized and then were promptly all laid off in favor of a chatbot named Tessa, which immediately failed at providing good, actionable eating disorder advice.
Justin Hendrix:
Well, I should hope that, perhaps via this podcast, some policymakers and others who may want to adopt a skeptical point of view will find Mystery AI Hype Theater 3000. Its hosts, Emily M. Bender and Alex Hanna, thank you so much for speaking to me.
Emily M. Bender:
Thank you. This has been a joy.
Alex Hanna:
Thank you, Justin. Appreciate it.