Samuel Woolley on Manufacturing Consensus: Understanding Propaganda in the Age of Automation and Anonymity

Justin Hendrix / Jan 31, 2023

Audio of this conversation is available via your favorite podcast service.

Frequently on this podcast we come back to questions around information, misinformation, and disinformation. In this age of digital communications, the metaphorical flora and fauna of the information ecosystem are closely studied by scientists from a range of disciplines. We're joined in this episode by one such scientist who uses observation and ethnography as his method, bringing a particularly sharp eye to the study of propaganda, media manipulation, and how those in power— and those who seek power— use such tactics.

Samuel Woolley is the author of Manufacturing Consensus: Understanding Propaganda in the Age of Automation and Anonymity, just out this week from Yale University Press. He’s also the author of The Reality Game: How the Next Wave of Technology Will Break the Truth; co-author, with Nick Monaco, of a book on Bots; and co-editor, with Dr. Philip N. Howard, of a book on Computational Propaganda. At the University of Texas at Austin, he is an assistant professor in the School of Journalism and an assistant professor, by courtesy, in the School of Information; and he is also the project director for propaganda research at the Center for Media Engagement.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Sam, in the preface to this book, you say it is "a book about people. It is about the people who make, build, and leverage technology and attempt to manipulate public opinion. It's about their particular uses of the curious and mutable tools and strategies that are constantly evolving out of the rich primordial ooze of the internet: bots, sock puppet accounts, influencers, astroturfing, and so on." How long have you been working on this book and how would you describe the sort of methodology for it?

Samuel Woolley:

I've been working on it for a long time. The story of this book goes way back to 2013 when I started my PhD at the University of Washington. And at the time I was working with a mentor named Philip Howard, Phil Howard. We were working on this question related to computational propaganda, and we were thinking through the ways in which bots and other forms, let's say other means, of creating false personas online, particularly on social media, were playing a role in politics. And so that was where things started. It all started with bots and questions about bots.

As time went by, I became more and more interested in the people who were actually building bots or creating sock puppet accounts, which are human-run fake accounts, or who were otherwise leveraging social media in order to try to exert some kind of power online. I was trained as an undergrad and master's student, before I went to the PhD, in anthropology and then cultural studies, so naturally I kind of wanted to do ethnography, work with people, and then ask questions about power.

And to me there was a gap in our understanding of how different groups of people were using social media in attempts to get what they wanted, in attempts to normalize this novel technology for control. It's a story as old as time, right? Anytime a new technology gets released, or at least a lot of the time when a new technology or new form of media gets released, people discuss how it's so exciting and it's going to liberate everyone. But then all of a sudden there's oftentimes a move towards the co-option of that tool by the powerful, by governments, by militaries, by companies and corporations and other folks that have more resources than the average person. And so that was the story I wanted to tell in this book, as well as, simultaneously, the story of, "Well, the internet's supposed to be a democratizing force, so how has it democratized people's ability to try to spread their own forms of propaganda?" And so that's another subdiscussion in the book.

Justin Hendrix:

I want to talk a little bit about the kind of intellectual landscape you see yourself as lodging this book in. In the opening you talk quite a lot about names like Walter Lippmann, Edward Herman, Noam Chomsky, others that you sort of draw on, Edward Bernays even in his sort of foundational work on propaganda. Where do you see this fitting intellectually? What do you see as the sort of contribution here to the lineage of those thinkers?

Samuel Woolley:

Yeah, some big names there. Propaganda was integral to the early study of social science with folks like Lippmann and Bernays and later on Jacques Ellul and Herman and Chomsky, but the study of propaganda kind of fell off after around the 1980s, early '90s. There was still some work being done on it. Jowett and O'Donnell have quite a good book on this. But by and large, after World War II, it lost some of its luster as a field of study. I think that that's partially because of the way it was studied at the time. It was studied experimentally. There was an attempt to recreate propaganda in a lab setting.

When I started studying the use of bots and other forms of manipulation on social media, I realized that there was a discussion that was happening about the co-option of social media, but it was happening absent a theoretical framework. And to me, the work that had been done on propaganda was a very logical scaffolding via which to work. And so I went back to some of the early writing by the folks that we've mentioned and was astounded by how much it resonated with a lot of what I was seeing online. So the folks that we were mentioning, especially the ones that were writing in the 1910s, '20s, and '30s, were talking about broadcast media and industrialization and the ways in which the newspaper and the radio, and later the television, were a force for the purposes of propaganda, not just by governments but also by corporate actors too. I wanted to sort of talk about social media in those terms.

Herman and Chomsky were obviously a big influence, and they were in turn influenced by Lippmann, who actually was the one who, for all intents and purposes, came up with this concept of manufacturing consent, or the engineering of consent as it's put by them, and earlier even by Gramsci. And so I wanted to ask, "Well, what does this look like in the era of social media?" Herman and Chomsky redid their book Manufacturing Consent in a new edition in the early 2000s, but that was just before social media really kicked off. And so I wanted to add to those conversations and pick those conversations up.

Simultaneous to writing this book, there were a lot of conversations going on about disinformation and about misinformation. For me, the really interesting question there was power and intent and who was behind all of this stuff. I wanted to look at production. And so political economy obviously comes into it, as does a lot of the early critical cultural work that was done by some of the folks we've mentioned, and a number of other folks too. And so that's really where this book comes in. Yes, it's speaking to the disinformation and influence operations oriented work that's happening now, but it's pointing out the fact that all of this work comes from a very long lineage of work on the manipulation of public opinion and that we should couch it as such.

Justin Hendrix:

Do you think we're in many ways just sort of rediscovering those ideas right now? That to some extent, as you say, there was, I guess, a long period in which for whatever reason there wasn't as much of a focus on it, and perhaps now, with a land war in Europe and the rise of populists, and in some ways the same context that you saw at the early part of the 20th century, we're just sort of rediscovering some of those ideas?

Samuel Woolley:

I think that's spot on. I think we are rediscovering some of these ideas. And the necessity of studying propaganda, or whatever we've called it in times gone by, has come about at different times throughout history. The most recent sort of proximate time this happened was, yes, during World Wars I and II with a lot of the scholars that we talked about, and then later during the Vietnam War. So yeah, there is a need to harken back to these people and say, "Hey, listen, these are the ways in which it was studied then." Of course, the paradigm that those people were speaking from was much more of a post-positivist, kind of hard science perspective, thinking through how you would do experiments to show the effects of propaganda. And I take a much different approach. I'm much more interested in talking about this as a human-based phenomenon, a sociological phenomenon. But at the same time, yeah, we must rely upon the work that's been done before rather than thinking that we're discovering something new. We're not discovering something new. The manipulation of public opinion has happened for a very long time.

We're talking about propaganda, but we're talking about propaganda being spread via new tools, and that is the crux of this book, right? It's a conversation about the ways in which these new tools allow for a different type of propaganda and also for a different level of amplification and suppression of particular ideas.

Justin Hendrix:

So you use this phrase computational propaganda, talk about automation, algorithms, partisan nano-influencers, and really kind of take us through how different types of groups have applied the ideas of computational propaganda. But let me just maybe take a step back. What do we know right now about the impact of political messaging or computational propaganda that has a kind of political purpose? Is this stuff effective? It seems like we get sort of mixed messages from the community of researchers who look at these things.

Samuel Woolley:

Sometimes it's hard to see the forest for the trees when you get into the research on this stuff. It's important to understand that the researchers that we're talking about have different perspectives on epistemology, ontology, axiology, that they're coming at this philosophically from different perspectives and also from different fields of study and different paradigms. Within the social sciences, you've got political scientists and psychologists, for instance, who have been dominated by a post-positivist perspective on research that is in search, oftentimes, of effects, and oftentimes using quantitative research to determine behavioral effects in human populations.

When it comes to propaganda and social media, it's very difficult to track a particular strand of propaganda across the internet and then determine effects on either an individual or a group level. And so in many ways, there are discussions about effects, particularly effects of the first order: does propaganda or disinformation affect someone's vote, did it elect a president, something like that? Those are really difficult questions to answer, but they are questions that people ask. I think that those are the wrong questions to be asking. I think that those questions are unanswerable in the age of social media because of the problems of automation and anonymity, particularly that last one, anonymity. You're relying upon a system that allows people to hide their tracks. It's a system where you can't really generalize a lot of the time because there's so many subgroups and websites and then parts of websites that we're discussing.

And so rather than trying to generalize, I'm much more interested in talking about the impacts of propaganda and computational propaganda upon particular subgroups. So I try to focus more narrowly. And what I can say about that is that computational propaganda has absolutely had an impact upon people like journalists, particularly journalists that are working in countries with limited media systems or autocracies. People in positions of power in countries like Turkey or countries like Libya or Myanmar have been able to leverage bots, but also groups of coordinated folks, sock puppet armies, to target people with whom they disagree and attack them to the extent that those journalists or human rights advocates or activists either leave the website or they have mental health issues, or they sometimes also simultaneously experience offline harm related to the online harm they've experienced, whether it's self-harm or whether it's actual harm from the government or the powerful operators in question.

Simultaneously, a big part of my research right now is looking into communities of color and diaspora communities, both in the United States and around the world, and the ways in which they're oftentimes unduly affected by computational propaganda. Unsurprisingly, maybe, it's often the most marginalized groups in our community who are tamped down by these kinds of mechanisms. They are targeted and their voices are taken away. And so circling back to the original question, what's the impact of computational propaganda, I think if we think in really broad general terms, it's always going to be hard to talk about impact. People think they have some kind of trump card when they bring that up, but what I say to them is, "If you start focusing on subgroups or sub-communities or specific spaces online, you're going to find an impact right away."

Justin Hendrix:

So it's almost a forest for the trees kind of thing to some extent. But you say, "To understand political influence in a digital world, we can't focus on tracking pure, empirically evidenced behavioral outcomes, direct notions of change as defined by traditional political science or psychology." You point to folks like Kate Starbird who suggest we should look at the sort of second order changes, the kind of network cascades, the micro movements of people and groups, in a more, I guess, systematic way.

Samuel Woolley:

Yeah, Kate Starbird's work has been really influential on my work. Kate was an assistant professor when I was at the University of Washington, and I remember her work on rumors back then; I found it really, really interesting. It's only gotten more nuanced over time. These ideas that Kate's talked about, and she and I have had conversations together about second order effects or third order effects, are really, really important. So how does computational propaganda, or the use of bots and other algorithmic or automated technology online, benefit particular groups? How does it benefit extremists? How does it benefit folks that are trying to troll a group of journalists? How does it benefit someone that's trying to game an online survey or an online poll? And if you look to those spaces, you're much more able to discern effect than you are if you're, like, a political scientist trying to find out whether or not Russian propaganda online changed the outcome of the election.

Justin Hendrix:

I want to make sure that the listener understands kind of how you went about this book. You say that you spoke to a variety of people in political groups in North America, Europe, North Africa, the Middle East. These are bot makers, people doing PR, dark PR on some level, folks who are working for politicians and autocrats. Why in the world did these people talk to you, Sam?

Samuel Woolley:

Yeah, it took a long time to get them to talk to me. In the beginning, they didn't want to. You have to understand that I began this work prior to 2016, when the cat kind of came out of the bag with Cambridge Analytica in the United States and Russian interference in the US election and Russian interference during Brexit. And so I was able to plant some seeds early on with these communities in order to talk to them about their practices leveraging bots online to try to manipulate public opinion, because they were really proud of their work then. They thought that what they were doing was novel. They thought they were kind of winning on this system. They were gaming the system, and so they wanted to discuss it.

Later on, once people became more savvy to what was happening online and the fact that there were various groups online trying to push conversations one way or another, it became more difficult. But the way that I got them to talk to me later was by finding ex-employees of companies. I used LinkedIn a lot to find people who had previously worked on particular political campaigns. I talked to folks who were in prison for doing the kinds of things that we're talking about. I talked to folks who had previously worked for social media companies, who were having a bit of a mea culpa moment, who wanted to talk about what had happened. Granted, all of these sources of data have their own drawbacks. Part of the work that I was doing with this book was having to separate the wheat from the chaff and separate people's biases from what actually happened, which is always part of qualitative work.

But I think they wanted to talk to me because, by and large, on the one hand they felt that politically they were correct in doing what they were doing. Not that what they were doing was politically correct, because oftentimes it wasn't, but they believed in what they were doing. So take, for instance, the BJP's IT Yoddhas working on behalf of the Modi regime. They believed in it, and so they believed what they did was helping their party or their candidate or their political system. On the other hand, folks were able to make a lot of money doing this. And so they were proud of the fact that they'd become rich off of this or that they had figured out how to monetize this in such a way that they were able to have success online.

And so there was a lot of arrogance amongst the people that I talked to, a lot of bragging. The thing about qualitative work of this kind is once you get someone talking, they almost forget they're talking to a researcher. They forget they're talking to someone who wants to understand this stuff because of an interest in power and politics. And so it snowballed from something that was really small in the beginning to now having a list of people that I can call at any given moment that I think are very likely to talk to me, including here in the United States.

Justin Hendrix:

I was struck by the fact that you open Chapter II in India and you focus specifically on the kind of massive operation in favor of Narendra Modi's campaign. What do you think about India in particular? What does that particular kind of environment tell us about these issues?

Samuel Woolley:

India's a fascinating case study for a variety of reasons. On the one hand, what's happening in India with the populism there and, you might call it, religious extremism is in parallel with what we're seeing in other areas of the world, including some of what we've seen in the United States and places like Brazil and Turkey. On the other hand, the playing field for where propaganda is happening is pretty different than it is in a lot of other countries. Or at least what I mean by that is that a lot of the conversations that are happening and a lot of the propaganda that's being spread are being spread over WhatsApp rather than on Facebook or Twitter. It's not to say that Facebook, Twitter and the other major platforms don't have a role. They certainly do. It's just that Modi and the BJP and the thousands of people that they have working for them to create content on WhatsApp have perfected a system over a closed network messaging app, an encrypted messaging app. And so I found that really fascinating from the get-go.

It was also really interesting to talk to these people because a lot of them were at a lower level. I mentioned IT Yoddhas. Yoddha means 'warrior' or 'combatant' in Sanskrit. These are people who are volunteering a lot of times for the BJP to use hundreds of different cell phones. They have what you'd call racked cell phones to spread messages across WhatsApp groups, not using forwarding, but because they're actually members of those groups. So that's actually another point about India that's really fascinating, which is that the BJP started building infrastructure to spread its propaganda well before WhatsApp made the decisions that it thought would limit the spread of disinformation and manipulation on its platform, i.e., limiting the number of times someone could forward a given message from group to group.

The respondents, the interviewees that we spoke to at my lab, actually said that a lot of the limitations that WhatsApp put upon its platform benefited the BJP, because the BJP had already built capacity across what they say are millions of groups on WhatsApp. So they don't need to forward. They actually have people working within all of those groups. And so any new political party or someone that's trying to challenge them has a really, really difficult time because they can't use particular kinds of amplification to get to where they need to be.

Justin Hendrix:

That's a fascinating idea that there's a kind of first mover advantage and almost a kind of lock in effect that is at play there.

Samuel Woolley:

They've been grandfathered in, basically, as the propagandists who can run this system.

Justin Hendrix:

I want to talk about the three levels of manufacturing consensus. You talk about how these various techniques and ideas are interconnected. You say at the center of the circle are the computational, automated tools made available by the rise of social media and the social shift towards digitalism. So we've got three types: political bot, sock puppet, and partisan nano-influencer-based efforts; social media algorithm, recommendation, and trend-based efforts; and news media-based efforts. Can you talk a little bit about this framework?

Samuel Woolley:

Yeah, sure. So going back to the early days of this work in 2013, '14, '15, we were studying what was basically rampant use of bots for the spreading of political content, but also commercial content online. It still happens today; it's still a huge point of discussion. You'll see Elon Musk discussing it nearly every other day, it seems like, the use of spam bots and crypto bots and stuff like that. But in the early days, there was basically an anything-goes perspective across most of the platforms for the use of bots. They might have had limitations in their terms of service about bot usage, but their actions in terms of deleting bots were kind of minimal. It wasn't until later that they got more serious on that front.

And so what bots allowed people to do was create the illusion of popularity for a movement. A really great early example of this was during the Scott Brown special election in Massachusetts. A group of tea party activists at that time built a campaign of several hundred bots specifically to target his opponent, Martha Coakley. What they levied against her were allegations that she was anti-Catholic. And so they were able to make it look like suddenly hundreds of people online were saying that Coakley was anti-Catholic in an election in Massachusetts, which is a pretty damning accusation.

Later on, what happened was regular people started picking this up, other kinds of folks online including influencers started talking about this, and then also newspapers started writing about it. So you kind of start to see how this happens at multiple levels. Over the course of time, bots began to be deleted at higher rates. They're still online, they still play a crucial role in the spread of propaganda, but sock puppets stayed, too. These are human-run accounts. They're harder to delete; it's harder to suss out whether or not they're human. And so there's questions about free speech there. And sock puppets kept getting used. Sock puppets still get used today to a big degree to organize groups of humans working to do the same thing as bots were doing, but just on a smaller scale.

In both of these circumstances, with bots and sock puppets, a lot of the time they weren't trying to actually change the opinions of people by engaging in arguments with folks; they were trying to change what the algorithms on the sites were recommending or saying was trending. So there was a conversation that was going on, not between the bots or the sock puppets and people oftentimes, but between the bots and the sock puppets and the algorithms behind sites like Facebook, Twitter, YouTube, in order to try to, again, manufacture consensus, create the illusion of popularity for particular ideas.

One of the examples I always give is the day of the Parkland shooting in Florida, when bots and sock puppets converged around a hashtag that alleged that David Hogg and his fellows were crisis actors. And later that day, that was the number one trending hashtag on YouTube's homepage, which is a great example of the ways in which these kinds of folks understood that if you leverage these tools, you could game the system in your own favor. But the crucial next step is that journalists would then pick up these stories, oftentimes in well-meaning ways (sometimes they knew that it was gamed, but oftentimes in well-meaning ways), and spread them as if they were truth. And so that story about David Hogg being a crisis actor suddenly got reported on by multiple different outlets because they were looking to social media trends as a mechanism for their reporting.

The other thing that I haven't mentioned, which you asked about, is these partisan nano-influencers. I see the use of nano-influencers as a logical next step in the kind of computational propaganda framework. Bots were really useful at bluntly amplifying particular streams of content early on, along with sock puppets; sock puppets just didn't allow for the same sort of reach. Partisan nano-influencers are people with under 10,000 or so followers but who have a very specific following, and so they have a little bit more psychological impact. Now what we're seeing is that political campaigns are paying these small-scale users to spread particular streams of content in very overtly coordinated ways. They'll give them particular scripts to talk from, they'll give them particular hashtags to talk about.

And so they're gaming the system, but through a different mechanism. That mechanism is really interesting because it is really difficult to moderate and to create policy around, "What does it look like to tell influencers they can't say specific things? How do we limit them? If they're being paid by a campaign, are they being transparent about being paid by a campaign? What if they're not being paid by a campaign and that they're just doing this because it gets them more likes and more engagement?" So now we're in this brave new world in which I think influencers are going to play a really big role in the spread of propaganda if we don't do something pretty quick.

Justin Hendrix:

So I want to come back to the role of the social media platforms, the technology firms themselves. But since you've focused a little bit in those last comments on the role of journalists, I want to kind of go there first. You say the fact is journalists play a very real role in perpetuating manufactured consensus, not always, not even usually done consciously, and sometimes it happens outside of the realm of the work of any individual reporter. But how do you see the kind of world of journalism, as it were, in its own crisis I suppose, interacting with computational propaganda and the broader idea here of manufactured consensus?

Samuel Woolley:

If there's anywhere in the book that I draw the most from Herman and Chomsky, it's this chapter on journalists and journalism and the institution of news making. Broadly speaking, the argument that I'm making is that social media have caused massive changes in the world of journalism, in the creation of the news. No one's going to be surprised by that. Journalists have to do their jobs differently now. News entities are massively challenged by social media, which are also monetized via advertising and thus, in many ways, competitors of news organizations.

News organizations have also had to change the way they write headlines, the way they write stories, the way they appeal to people because of the fact that most people access most of their information online these days either via Google, via Facebook, via Twitter. And so there's this push and pull going on between social media and news. And the news industry tends to get the short end of the stick. I'm not saying people should necessarily feel sorry for the news industry. The news industry has had a lot of power over the course of the last 100 years, but I do think people should feel sorry for the journalists that are in the crosshairs of this stuff. Journalists are oftentimes working on very tight deadlines. They're working to get more clicks on their stories than their competitors. They're working to get their stuff noticed, and they're also leveraging new tools to find stories.

And so, you see the rise of computational journalism or data journalism where people are basically counting things online in order to tell particular stories. There's some really good examples of this work being done critically, fantastically, speaking truth to power. But there's a lot of examples of this kind of work being done in such a way that it actually perpetuates the exact same kinds of gaming of social media systems that we see happening via bots, via influencers and those sorts of things.

So sloppy journalism that, say for instance, quotes tweets or embeds tweets in the story and doesn't do any work to verify whether or not those accounts are real is a great example of a very facile way that journalism early on perpetuated these kinds of problems. To this day, there are still well-respected news entities that embed tweets from random profiles in their stories as if that's exemplary, or somehow the same as going to a real source and knowing who they are and talking to them.

On a broader scale, trends and recommendation algorithms have impacted the way that news stories get told. In many ways, journalists hop onto stories oftentimes because there's traffic surrounding a particular topic online. But what we know about the online sphere, what I learned about the online sphere in this book and over the course of the last nearly decade of research, is that it's pretty easy to manufacture stories and trends online with the intention of not just getting them to spread via the algorithm and then across the internet, but also to dupe journalists and newsmakers into spreading that content and legitimizing it through "mainstream media."

By no means do I want to attack the press. I think that journalists do such important work in our society, and I know so many fantastic people who work in the news. But I do think that this calls for a renewed sense of care and a renewed sense of ethics in journalism. I think that it calls for some deep navel-gazing about what role social media play in journalists telling the stories that they want to tell. I think that it calls for journalists to have a very critical eye when they look to social media as a source for either information or stories or what have you.

Justin Hendrix:

Let's step back for a second and just talk about the nature of these Skinner boxes, these big social media platforms, and the makers of those. You talk to one former employee. To some extent, it doesn't sound terribly different from other whistleblowers or disaffected former social media employees that we hear from, it seems like, in an ever-increasing drumbeat these days. But there's just real concern about the incentives and the profit motives and the scale, and in general, a kind of almost pessimism that they'll ever do anything to really address the game mechanics at play here.

Samuel Woolley:

If I had a dollar for every conversation I'd had with a current or former social media employee that took this perspective, the perspective that's been exemplified by some of the whistleblowers but also just by other folks that have talked to journalists over the course of time, I'd be a rich person. From the perspective of most employees at these companies, they're kind of working on runaway trains. I've heard so many metaphors used by the employees themselves. The other metaphor that I used to hear used a lot was it's like we're working on the plane while the plane's already being flown or whatever it is. We're trying to build the plane while the plane's being flown.

But the perspective, by and large, amongst a lot of folks that I talked to from these companies was: yeah, we created these systems. It was a move fast and break things mentality. That was literally the motto at Facebook. Let's see what we can do. There wasn't a lot of thought about consequences. There wasn't a lot of thought about whether or not we would regret the things we were building. The goal was just to build and see what happened. And therefore, it's really no surprise that we ended up where we did, because these informational systems are just that. They're systems where people go to get their news. They're systems where people go to learn about the world, for better or worse. Even if people disagree with that and say, "Well, who gets their news from social media?", you need do no more than turn to Pew and look at some of the statistics on the fact that people get a huge proportion of their news and information from these websites now.

The profit motivation of these organizations, particularly Facebook and YouTube and those like them, is driven by advertising. One of the things that I learned through this research is that there's also the usage of bots in a big way to drive up ad numbers and to drive up clicks and views on ads. And so it kind of undermines the bottom line in a big way. But once they got into it and once they started growing at the clip they did, it was almost too late to go back.

Mike Ananny, who's a professor at USC and a very brilliant guy, said something which I think is really germane to this conversation, which is, "Should we feel sorry for the social media companies because they grew so fast? Because they have billions of users, and now they have a hard time moderating the content and they feel like they can't do anything to solve the problem?" And for him, the answer is no. And for me the answer is no as well. They grew at the rate that they grew. They made the decisions that they made. They either willfully ignored or were just oblivious to the problems that existed, or knew and went ahead anyway, and now we are where we are today. So that's the story that gets told by this one particular person, but it has also been told to me by a number of other folks as well.

Justin Hendrix:

Towards the end of the book, you talk about the idea of what's next for ethnography and for this kind of research approach. You talk about an ethnography of information and the idea of spending time with non-human or semi-human actors, so actually immersing yourself in this world of bots and what looks like human behavior. I was struck by the idea that this is a book that's come out right at this moment of hype around generative AI and ChatGPT and these types of large language models that appear to be, if not passing the Turing test, certainly fooling a lot of folks. What do you make of that? How do you think you'll be able to do ethnography in online spaces in the future when perhaps some of those actors are in fact AIs?

Samuel Woolley:

That's a great question. So this book always goes back to my fascination with bots. In fact, last year with Nick Monaco over at Microsoft, I had a book called Bots that came out that we co-authored. And so it's near and dear to my heart. I think that the internet has always been a place of hybrid identities. I think that automation has always played a role online. There have always been fairly sophisticated, if you want to call it that, chatbots, going back to ELIZA and others like her.

ChatGPT in many ways was not a surprising thing to me when it happened. It was something that I knew had been going on in Silicon Valley and elsewhere for a while, the creation of these kinds of very smart machine-learning bots. I think that it's crucial that listeners understand that bots are always, at their essence, created by people. They always have parts of the people who make them inside of them. They are a human creation. They tend to look a lot more like C-3PO than they do like R2-D2, in the words of one person I spoke to for my book who knows bots very well.

So we oftentimes build them in our own image. And because of that, I think that it's really important to study them as actors in systems that are human systems or social systems. Sometimes bots get away from their makers, sometimes bots do things that are unexpected. I think those are delightful, worrying, scary times that are very worthy of our study. But I do think that researchers have to start asking themselves crucial questions, not just about studying the content that bots produce, but studying the bots themselves, studying the creators of those bots, studying the motivations for building the bots, all sorts of things like that. Because in the production of these things lies a power dynamic and lies a story. That story's got to be told.

People like Kate Crawford and danah boyd and Nancy Baym and Tarleton Gillespie and the folks at the Microsoft Social Media Collective have done amazing work to talk about the sociology of technology, the role that technology plays in our society, and how technology can be a driver of sociality, of social interaction, of power. And so that's what I'm calling for at the end of the book. I'm saying, "Hey, listen, it's great to talk to the producers of bots, but we can also do ethnography of bots themselves. We can spend time with them. You can even build bots and interact with other bots and study that too. And so why don't we do that?"

Justin Hendrix:

Last question for you. We're seeing a kind of, I guess, politicization of even the study of disinformation or the study of issues like the ones that you address here in your book with various academics kind of under scrutiny or being FOIA'd in state schools like the one perhaps that you work at. The study itself is kind of in the crosshairs of disinformation artists on some level, and those who may legitimately have political qualms with some of the outcomes of studying these issues. What do you make of that? How does that affect your work? How does it affect your thinking?

Samuel Woolley:

I mean, it's not surprising to me. This work is inherently political. We're studying the use of social media in these cases as a mechanism for control. And so it's important work, it's work that shines a spotlight on some really bad practices that have allowed some people to have an edge for quite a long time. And so it's unsurprising to me that some of the people that had that edge are lashing out and trying to push back. At the same time, as someone who specifically tries to focus on the study of propaganda, given its lineage and the perspectives on it, rather than just disinformation, I understand some of the qualms that people have about the indexing of information or claims about what is good quality information versus bad quality information. I think at the heart of this is a tension between free speech on the one hand and privacy, safety, and authenticity on the other, online and offline as well.

I think that in the next couple of years, we're going to see a lot of push and pull between these issues. I think that policies will have to take into consideration all of the issues. I don't think that research that works to index news sites and say, "This news site's a good one. This news site's a bad one," can ever escape bias. If you read this book, you'll see that I completely acknowledge the fact that researchers have a lot of bias, but that doesn't mean that there's not value in this work. It doesn't mean that there's not a huge amount of work still to be done.

I don't think that we can let ourselves be stopped, but I do think that we have to be careful in what kind of claims that we make with the work that we do. I think that we have to work to build technological systems that are democratic in nature, that are motivated by human rights. And if we do that rather than build systems that only push forward the motivations of the powerful, then we'll be in a better circumstance if we're able to build technology truly for democracy and human rights. And so I think that's what a lot of the researchers are asking for. I do think that the hype around disinformation research has been a little bit overblown. At the same time, there's a reason why people have focused on it, and it's because it is a massive, massive problem around the world.

Justin Hendrix:

Well, the book's called Manufacturing Consensus: Understanding Propaganda in the Age of Automation and Anonymity. Sam Woolley, thanks so much.

Samuel Woolley:

Thank you very much, Justin.
