How to Counter Disinformation Based on Science

Justin Hendrix, Dean Jackson / Feb 25, 2024

Audio of this conversation is available via your favorite podcast service.

If you’ve been listening to the Tech Policy Press podcast for a while, you know we’ve spent countless hours together talking about the problems of mis- and disinformation, and what to do about them. And, we’ve tried to focus on the science, on empirical research that can inform efforts to design a better media and technology environment that helps rather than hurts democracy and social cohesion.

Today’s guests are Jon Bateman and Dean Jackson, who just produced a report for the Carnegie Endowment for International Peace that looks at what is known about a variety of interventions against disinformation, and provides evidence that should guide policy in governments and at technology platforms.

  • Jon Bateman is a senior fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. His research areas include disinformation, cyber operations, and artificial intelligence. Bateman previously was special assistant to Chairman of the Joint Chiefs of Staff General Joseph F. Dunford, Jr. He has also helped craft policy for military cyber operations in the Office of the Secretary of Defense, and was a senior intelligence analyst at the Defense Intelligence Agency.
  • Dean Jackson is principal of Public Circle Research & Consulting and a specialist in democracy, media, and technology. In 2023, he was named an inaugural Tech Policy Press reporting fellow and an affiliate fellow with the Propaganda Research Lab at the University of Texas at Austin. Previously, he was an investigative analyst with the Select Committee to Investigate the January 6th Attack on the U.S. Capitol and project manager of the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace.

A lightly edited transcript of the conversation follows.

Justin Hendrix:

Good morning. I'm Justin Hendrix, editor of Tech Policy Press, a non-profit media venture intended to provoke new ideas, debate, and discussion at the intersection of technology and democracy. If you've been listening to this podcast for a while, you know we've spent countless hours together talking about the problems of mis- and disinformation and what to do about them, and we've tried to focus on the science, on empirical research that can inform efforts to design a better media and technology environment that helps rather than hurts democracy and social cohesion.

Today's guests have just produced a report for the Carnegie Endowment for International Peace that looks at what is known about a variety of interventions against disinformation and provides evidence that should guide policy in governments and at the technology platforms. I caught up with the authors earlier this month.

Dean Jackson:

I'm Dean Jackson. I am the principal of Public Circle Research and Consulting, and I am an inaugural reporting fellow with Tech Policy Press.

Jon Bateman:

My name is Jon Bateman, I'm a senior fellow at the Carnegie Endowment for International Peace.

Justin Hendrix:

The two of you have just dropped a 130-page report, Countering Disinformation Effectively: An Evidence-Based Policy Guide. Before we get into some of the details of it, I just want to talk about the moment into which you have released this report. We're headed into a global election year. There are countless elections taking place in democracies around the world. Indonesia has just gone to the polls at the moment that I'm speaking to you, but the debate over the role of disinformation in elections and on social media has changed. There are, I suppose it's fair to say, naysayers: those who disagree with the framing of the problem, those who disagree with some of the tactics that have been used to address disinformation, those concerned that some of the solutions are worse than the problem. How do you think about that moment in the discourse in which you're releasing this report?

Jon Bateman:

I think you're absolutely right about the changing discourse, and that was a key reason why we wanted to write this report. It seems like there's a huge divide between a group of people, and maybe this could even be called the establishmentarian perspective in a lot of liberal democracies now, that defines disinformation as one of the essential challenges of our time, maybe the most urgent question facing democracies and something that infects all policy issues. We just saw the World Economic Forum come out with something saying that disinformation and misinformation are the top problem facing the world. That made a lot of rounds on Twitter, people kibitzing about it, so that's one point of view.

Then another point of view is that the idea of disinformation, even beyond just the term itself, is a poor framing of the problem. It's not actually what's happening, or maybe it is a real problem, but it's so elastic: there's potential for bias, a lack of clear definitions and, of course, in political discourse it's become a term of all-purpose abuse. So now, if you're a candidate or a public official, instead of saying, "Hey, you're wrong about me," you say, "Well, I'm the victim of a disinformation campaign or a misinformation campaign."

What Dean and I wanted to do in this report is speak to both sides of this divide. We think that this is a serious problem, if only because there is a group of core cases, like the Stop the Steal movement, that I think any credible person could look at and say that is wittingly shared false information that is having a huge destabilizing effect on the United States and other democracies. Policymakers need evidence-based tools, but they also need to understand the conceptual and empirical limits of what they're working with.

Dean Jackson:

I want to really commend the introduction, which I did work on, so in a way I'm patting myself on the back, but which Jon did a lot of work, and got a lot of feedback, to hone and sharpen, because it lays out what I think are a lot of good faith criticisms of what's become the counter-disinformation field. There are arguments that it's an attempt to assert authority over what's true and false, and those are always political. Even if the information can be verified, which institutions people are asked to listen to and believe, that's a political appeal.

I think what's interesting to me about the report is the solutions that we try to evaluate. They're all limited because the problem is perennial. It's not really new that political actors lie. The effects of those lies, the way they're distributed and received by the public, the ability of other sources of information to counteract them, that's all changing with shifts across society, technological shifts. I think that's worth interrogating, and rather than getting people to believe the right facts, I think getting to an information space where that relationship is healthier is the objective of many of these interventions.

Then there are some shorter term tactical things that try to mitigate harm in the meantime, but none of them really are expected to put the problem back in any box and be able to walk away from it and say, "We're done."

Now, there are also, I think it's important to call out, bad faith critiques of the field: arguments that all of this is just attempted censorship, or an attempt to paint political opponents as dishonest when they're not. That's also a political play. That's also an attempt to assert control over the truth.

Hopefully what this report does, I'm hoping for three things. One is to say, okay, it matters how the information landscape is structured. Let's get out of the back and forth over individual pieces of content, individual disputes over viewpoint, and talk about what a healthy information and media landscape looks like. But then I really also hope that it calls attention to the large gaps in evidence for how we do that. There are a lot of assumptions that this is a tech problem, that if we can just get the techno-legal framework right, all of this will be so perfectly calibrated and we'll go back to some perceived golden age.

I don't think that's totally true. I think tech plays a big role, and I've talked about how big that role is, and that's one of the reasons I think it's so important to get more evidence on that question, but some of this is societal, comes from other trends, other drivers. They're big problems, big questions, and we need more good social science in a lot of these areas to enable the types of policy actions and interventions that might get us to a better place.

Jon Bateman:

I would just make maybe two overarching points. One is that with all of the ideas swirling that you just highlighted, we do think that the counter-disinformation field is at a moment where it's ripe for a wide-angle, thoughtful reassessment. We're not saying that our report is the definitive reassessment, but we hope that it can contribute to this conversation. I'm not sure that some of the critiques that we're lobbing in this report would have been as well received in an earlier time, when maybe there was more groupthink in the field.

The other thing that I would say is that as disinformation countermeasures have become such a major policy area, it's led to the growth and empowerment of technocrats and experts, people like me and Dean and our equivalents who work in platforms and government agencies. So we want to speak to those technocrats and we want to both give them a shot in the arm and a poke in the eye at the same time. We want to arm them with evidence, but we also want to challenge them to take all these critiques very seriously and be very humble about their behavior.

Justin Hendrix:

One of the things I like about this report, and you just got into it, is the fact that what you're trying to do is distill empirical research as a way to inform what those technocrats get up to. One thing I've found to be true in the course of teaching a course on tech, media, and democracy over the last seven years is that we do have the benefit now of a great deal of science on how various interventions work. Even at this point, some meta-studies and other reviews that collect that empirical evidence make it more consumable and perhaps more actionable.

This is a difficult space. Variables change and shift. It's hard to get a clear read sometimes on even what the baselines of the problem are, what the prevalence is of some of the issues that we're trying to address, but you seem to have done a good job, I think, of pulling those things together. Just speak quickly to the methodology for this report, how you went about it, how long it took to put this together, and I'll point out to listeners, there are 130 pages here, a good amount of which is citations.

Dean Jackson:

We assessed 10 interventions. That is obviously not the full universe of interventions. You can bundle them in different ways, so we had to start by bounding what we were going to look at and we didn't want to do that in isolation, so we held a couple of big expert meetings where we got 20 to 30 people on a call, walked them through what we were trying to accomplish. We wanted to look at how much was known about these interventions, how effective the evidence available suggests they might be, but then also the resources required to scale them, and we said, "What's the universe of options here?"

At one point, I think we had a list of a couple dozen, and we said, "That's far too many for us to assess, but let's see which ones can be combined, which ones could be pared down," and we solicited a lot of expert feedback about how to do that and how to get this list down to a manageable number, which in the end was 10.

After that, we did start looking at the literature around each of them. We produced a series of case studies and we tried different ways to continue to be in conversation with other scholars, other experts, other practitioners to inform those and make sure that, again, we weren't just doing this in isolation, putting our own spin on it, failing to cover our blind spots. So for some of them, that meant additional readings, additional roundtables. For some of them, that meant other ways of getting feedback. We asked for a couple of really deep scrubs on some of the case studies, and all of them I think at one point or other got some form of peer review.

So over the course of several months, we collected literature, reviewed it. For some of them, it was pretty difficult to find literature or to frame how to assess the literature that's out there. I think a lot about the case study on removing inauthentic asset networks where the big problem is actually there's information out there, but it only tells part of the story. All of them were a unique challenge in their own way, but in the end, we ended up with 10. We had a lot of conversation about how to assess those 10 against the three variables, which we discussed, which are all laid out here in a chart, and from there, we drew our conclusions once we had a sense of the landscape.

Justin Hendrix:

So those three dimensions are how much is known about this particular intervention, how effective it seems, perhaps even in the phrasing of that pointing to the fact that in some cases we don't know precisely how effective an intervention might be, and how easily it scales, which is another key question. We can't go through all of these in a podcast. I'll send the listener to the report if you want to hit all 10, but let's talk about two or three of them in particular. I want to start with number one because you start perhaps in a place that some folks won't expect, which is supporting local journalism. You say a modest amount is known about this particular intervention, that it seems like it might be significantly effective and yet very difficult to scale. Why did you start with local journalism?

Jon Bateman:

I would say that was a quasi-intentional choice. We do have different interventions that focus on platform activity, like Dean mentioned the takedown of inauthentic asset networks, reforming algorithms, things of that nature. That's getting a lot of attention already. If you work in this space, what is going on in and around platforms is important. No one would contest that, but it soaks up a lot of attention and energy. Meanwhile, I think there also is this lower-level awareness that the information environment is more than just platforms and that, in particular, traditional media, even TV news, has a huge impact on people's perceptions, behaviors, political discourse, and even the incentives of politicians and other leaders to say the things that they do and act the ways that they do. So that seemed like a logical starting point in a way.

The other thing I would say is that many of the not only platform-based but other kinds of measures that people are interested in, including various forms of fact-checking, again, get a lot of studies, a lot of attention. Fact-checking is the most studied intervention that we found. I don't want to diminish these measures by calling them tactical or narrow, but they are more in that whack-a-mole realm of chasing down a lie and trying to debunk it. We felt that it was really important to spotlight some of the slow-burning, more ambitious efforts to tackle the societal roots of some of these problems, and one of these is just the lack of access to good quality, authoritative, locally trusted information about matters in your community and the nation and the world.

Dean Jackson:

My favorite way of summarizing this case study is that there's a pretty good amount of academic literature showing that where local journalism is healthy, it provides civic benefits. In towns with a healthy local newspaper, you find less polarization, and you can measure that by the amount of split-ticket voting, for example. You find more civic participation, higher rates of voting, lower rates of corruption. There are all kinds of good benefits from local journalism. The way in which the literature is limited is by what I started to call the Humpty Dumpty problem, which is once you lose that, once the local newspaper dies, once there are layoffs, once everyone starts getting their news from national news sources or social media pages, can you put Humpty Dumpty together again? Can an infusion of money into, let's say, a new nonprofit news outlet restore those specific benefits, or does the tenor of public discourse change in ways that are durable and long-lasting even after you've tried to replace your legacy media?

I don't think there's as much research on that. I would love to be proven wrong and to see evaluations of all the many philanthropic efforts that are out there now to prop up the traditional media sector. I'm sure some people are doing that. Evaluation is a tricky game. Some philanthropies do it better than others, but I think that's the multi-million dollar question here. Can we get what we want out of reinvigorating a local news sector after it's been left to wither?

Justin Hendrix:

It does seem to me unclear that all of those efforts, whether philanthropic or otherwise, well, I should say it is actually clear that all of those efforts, philanthropic and otherwise, seem not to have been sufficient to replace the fabric of journalism as it was perhaps intact prior to the internet. We'll see. You address various different interventions that are proposed around the world on that. You don't seem to pick one, of course, and you reckon that there are political problems with many, including news media bargaining codes and other types of interventions, but I commend you, I think, for putting that one first. It does seem like a substantial problem, one whose impact is really hard to measure and yet, as you say, significant.

You've already hit on fact-checking. You point out, I think interestingly, something I didn't know, which is that the number of fact-checking organizations in the world has flattened off. There was a steep rise after 2016, and we're now at a place where perhaps we've reached the ceiling. That has something to do with funding, I'm sure. The tech platforms are the big funders of fact-checking around the world. I don't know if there's anything you want to add on that, but I'd love to get into labels as well, which I see as connected to this question around fact-checking.

Jon Bateman:

They're very overlapping and we sometimes struggle to disaggregate them. The one thing I'll say about fact-checking, which is interesting, is that, and hats off to Dean, we spent more than a year on this project. Dean literally read, actually read, hundreds of scientific studies for this research, and a huge portion of those were on fact-checking. So that gives us some confidence that we know that fact-checking is reasonably effective, but it's also interesting to then think about, "Well, if fact-checking is the high watermark of how well an intervention is currently understood across all of these options, how much do we still not know about fact-checking, and what does that tell us about the generational task of filling some of our knowledge gaps in other areas?"

I do find it striking that we can validate fact-checking as reasonably effective, and yet the context, the framing, the word choice, the presentation, the colors, the voice being brought to that fact-check, the timeliness, all of those things we know are crucial. We don't really know that much about how. So I would just say the fact-checking study for me was bracing in that it let us know that for all of these other interventions, we will probably be in somewhat of an ignorant limbo for maybe our whole lives, certainly many years, and hence the need to do the best that we can with very limited information.

Dean Jackson:

The fact-checking study is illuminating in a lot of ways because it speaks to how much having a clear objective is an efficiency booster. We actually have a lot of pretty granular studies on fact-checking: how does phrasing matter, how do emotional appeals matter, how is it received when the issue is highly partisan. But a big open question is, what are you trying to accomplish by funding all of these fact-checkers? Is it enough if voters can accurately cite the number of people crossing the US border every day? Does that achieve the objective for fact-checkers if that number is not wildly overestimated by voters, or do you want to somehow then get them to come to a series of policy conclusions based on accurate information?

If what you want is to change their view on political issues or on candidates, there's a lot of evidence that fact-checking doesn't move the needle all that much. Sometimes there are even studies where they pay people: people take quizzes and are paid for correct answers, so you know they're not just answering in a partisan fashion. Fact-checking can give people corrections to incorrect claims, and then those people can repeat those corrections later. Does that move the needle on their attitudes toward political figures or on political issues? Much less often.

So it gets back in some ways to the issues that Jon raised at the top, in the introduction, about what the goal of this work is. Is it just to provide the correct facts and figures, or is there a theory at work that incorrect facts and figures are driving trends in political life in ways that are illiberal and worrying? It's a good case study for that reason, because there is a lot of fine-grained evidence, but depending on what you want fact-checking to do, it is better at some things than others.

As for labeling, since you asked about it, Justin, the two are connected in the way we separated them. In our minds, fact-checking is about, can you get people to understand incorrect claims and later repeat the correction? For labeling, it's more, does the way that you portray that correction matter? The answer is, of course, yes, it does. In some ways, the answers are obvious. Bigger labels are more effective; labels that clearly rebut information are more effective than labels that make an ambiguous claim of, "Oh, this is contested."

I think labeling also has a similar objective question in that labels can be really good for addressing, say, a member of your family who doesn't follow the news that much and is confused by what they see online and needs some guidepost for what's reliable and what isn't. If your goal is to reduce the temperature of politics and cut off false claims among more politically extreme segments of the population, I think labeling is less effective for that. We actually have, courtesy of the Facebook Oversight Board, some good internal platform data on Meta's labels, which shows that the more somebody interacted with claims related to vaccine hesitancy during the COVID-19 pandemic, the less often they clicked on the labels they saw. So the people who are most bought into the conspiracy are least affected by these labels.

Justin Hendrix:

That brings us to your fifth case study, which is counter-messaging. You say counter-messaging is premised on the notion that evidence and logic aren't the only or even the primary basis of what people believe. Certainly, that seems very clear from the science and is something that perhaps many will just regard as common sense on some level, and yet I think we've now got a more fine-grained understanding of it.

I'll also say, just in talking to people who work on problems related to mis- and disinformation, I hear a lot of them saying that they're moving toward a counter-messaging strategy for a variety of reasons. One is that they find that the platforms are less responsive to their concerns and complaints when they do surface material which is clearly false or otherwise problematic or maybe against the platform's policies. That's one reason.

Another reason is that it's really, I think, a soul-crushing thing to spend all of your days playing whack-a-mole. I think there's a sense coming back from some of these communities that spending your time talking about what the "other side", quote, unquote, is lying about or trying to frame as falsehood is maybe a less productive way of spending your precious moments communicating with people. Tell me about what you learned about counter-messaging as a strategy against disinformation.

Jon Bateman:

I'll offer a couple of big takeaways for me. One is that this is an area where the theory behind it is so strong. There is so much psychological and communications research indicating that if you want to persuade somebody of something, it's not just about facts and evidence; it's, while being truthful in our case, about storytelling and narrative and psychological framing. So that's almost beyond question, like you said, Justin.

Now, how do you evaluate specific examples of that? That then becomes much more difficult. Doing that subtle matching between ... We heard so much during COVID: we need credible messengers to reach vaccine-hesitant communities, people who would be respected by them. Well, how do you do that, and how do you measure your success? That's much, much more challenging. So it does seem to me that this is a very promising area, but also very labor-intensive and empirically difficult to confirm whether you have done that match successfully and come up with a narrative frame that's effective for people.

The other thing I would say about counter-messaging is it goes back to what we are trying to accomplish by opposing disinformation. Counter-messaging is the case study of all of these that moves furthest away from the technocratic frame, and I think, Justin, when I'm hearing you, I'm hearing this almost political awakening that some people in this community are having: maybe technocratic tools aren't the most powerful here, or even the most appropriate tools, and a lot of what we need to be doing is organizing political and social movements to speak to people about the issues and stories that we find powerful and important.

In other words, a return to politics rather than this pathologizing of political discourse as disinformation, misinformation. We just need to be aware that that's what's happening, and that's actually a helpful narrative shift in some ways: we're now no longer acting as technocrats to a degree, we're acting as political participants, but that's actually what democracy needs.

Justin Hendrix:

I want to talk about some of the, I think, more nuts-and-bolts things that you think need to happen. One in particular is cybersecurity for elections and campaigns. This seems like a clear one, and yet on some level it is somewhat removed from the realm of what most people typically think of as counter-disinformation, and yet you include it here as perhaps one of the most important things that we ought to be looking at.

Jon Bateman:

To just tackle disinformation and tackle nothing else is sometimes difficult. So we do end up being drawn into adjacent questions, like Dean was talking about earlier: polarization, trust. Within the elections context, we tried to keep in mind two very specific threats that are more connected to disinformation. One is a hack-and-leak operation, which actually could be truthful but is often done under a false flag or could have disinformation infused in it. The other is a hacking of an election system, in which claims might be made about electoral integrity that aren't true. That could be seen as the disinformation hook, as it were, but as we discovered in this case study, it's actually not hard in some ways to secure election systems, in the sense that the cybersecurity field has a pretty strong sense of what works to protect systems. Nothing's guaranteed, but there are a lot of best practices that just aren't being implemented.

The challenge is twofold. One is that there are a lot of institutional and political and financial reasons why election systems and candidates don't have very good cybersecurity, so those are very difficult to shape. Also, will people believe that the systems are secure? Because that's actually a completely separate question.

Justin Hendrix:

I want to spend a minute on the role of the state in countering foreign interference, and you focus on that in particular. This is also a contested area. I know Nina Jankowicz has a piece in Foreign Affairs arguing essentially that the US has backed away from its efforts to counter foreign disinformation in particular, and yet your report regards that as an important aspect or facet of what has to be done. I don't know. How do you assess the US government's performance with regard to countering especially covert or even overt efforts to sow disinformation in US political discourse?

Jon Bateman:

I don't want to overstate this. It's a piece of the puzzle. One of the points that Dean and I always make upfront is that if you want to think about the universe of disinformation as your problem set, foreign disinformation is a very small portion of it. Most disinformation is domestically generated, and it seems like that domestic disinformation is actually more widespread and probably more persuasive and impactful. That said, it's bad that foreign adversaries are trying to monkey with our information environment, and we have this whole edifice that could be postured against it.

So then attention in the national security community naturally wanders toward how we could be using these tools. Of course, we're thinking about sanctions, indictments, counter-cyber operations. I guess our overall takeaway here is that these are helpful measures to the extent that they can disrupt or add friction to the adversary activity. It's not going to create strategic deterrence. It's not going to cause the activity to stop. But, for example, in the 2022 midterms, Cyber Command said that it executed a cyber operation to disrupt the Internet Research Agency in Russia. That strikes me as smart: a targeted takedown of disinformation infrastructure during a very sensitive time window.

Justin Hendrix:

The last two case studies focus on things that I more or less think about as infrastructural questions about how we've organized the internet right now around profit-driven platforms. So you address the intervention of reducing data collection and targeted ads, as well as changing recommendation algorithms. Let's talk about both of those. I see them as related in my own mind, and note that some of the other groups that have worked on countering disinformation at a grassroots level, groups like the Disinformation Defense League, which put out a set of policy proposals, for instance, are very much focused on this ecosystem-level aspect. What are the incentives? What are the economic incentives that are driving the way that platforms operate, the way the internet operates? What would reducing data collection and targeted ads do to counter disinformation?

Dean Jackson:

The two case studies are related because, of course, the data that is collected is used to target more than just ads. It's what informs the algorithms that recommend content on many different platforms. We've separated them here in part because there are already examples of privacy legislation around the world and in the United States that tackle the data collection piece of this. The algorithmic piece is more difficult, partly because it's more opaque, and in the United States because there are, I think, a number of thorny First Amendment questions that have to somehow be resolved, and we still don't really know how those will play out in practice.

For the data collection case study, I think some people might be surprised by what we find. We know a fair bit now about how targeted ads influence politics, but we aren't that bullish on this as a counter-disinformation intervention. I think there are really good reasons to support data privacy legislation, but as a counter-disinformation intervention, this is one of the less compelling case studies. Part of the reason is that the effectiveness of targeted advertising, while it matters, is easy to overstate.

If you think back to the 2016 election and then the furor over the Cambridge Analytica scandal, there was a lot of talk about the use of this technology for psychological manipulation of voters. Then we started to see studies come out, some of them from European elections, that ask, "Okay, how much do targeted ads actually move the needle, both on perception of candidates and parties and then on actual vote patterns?" One of the lines that sticks out to me from one of the studies is that there's a marginal effect here: a targeted ad is a few percentage points more effective than a non-targeted ad, which you would not really call manipulation. It's still in the realm of persuasion. And if you move sentiment, that doesn't always change people's votes. I can feel warmer about a candidate, but still feel warmer about another candidate regardless of what the targeted ad has said, and this is backed up by other work.

I would point to the recent study from UNC by Matt Perault on generative AI and political advertising, where he also draws on a lot of this literature and says that, in general, we overestimate the effectiveness of any one piece of political persuasion, of any one political ad, in part because there aren't that many persuadable voters anymore, but also because those voters are hard to move. They're bombarded by ads from both sides all the time, and ads have an ephemeral effect. Most of them fade, and in the end, it's just difficult to say how effective they are and very easy to overestimate that effectiveness.

We also have examples from Europe with the GDPR where there have been attempts to sever the collection of this data and its use in targeted advertising. It doesn't seem to have slowed the disinformation flow in the way that people had hoped.

For the algorithmic piece, we know a lot less about this one, but I think it could be really effective. It's a high-risk, high-reward proposition. The theory is that the way social media platforms often distribute information gives rise to many of the things that drive the distribution of disinformation: they privilege information that's divisive, polarizing, inflammatory, that in some ways promotes conflict and confrontation between individuals. One of the most powerful pieces of evidence, I think, is something that came out of the Facebook Files, where a number of European political parties approached Facebook and said, "Hey, changes to your algorithm have changed the way that voters react to the information we post on your platform, and we're actually changing our party platforms, our actual stated policy goals, because voters want to see things that are more confrontational, more negative, more angry."

Whether or not that was the algorithm, these parties were reacting to the way that voters communicate with them on a platform because of a change the platform made. So there was, at one point at least, a very real impact from that. That's several years old. We don't know; there have probably been many dozens of algorithmic tweaks across many different platforms since then, and we don't have the visibility into those changes to evaluate them. That's why things like the data access provisions for researchers in the DSA are so important, because these things are always shifting. They shift faster than policymakers can evaluate them, and researchers don't really have the visibility they need to study them and answer questions about the effect those algorithmic systems have on our politics. But given the scale at which the platforms operate, the amount of news and information that travels across them, and their importance in the overall ecosystem, if there's a significant impact from them, changing that impact could be an incredibly effective intervention.

Justin Hendrix:

You finish with a side note on the rise of generative AI. In some ways, looking at this report, stepping back from it, you could almost think of it as a marker in time: this is what we know about problems of disinformation and solutions to them before AI. You point out even that future historians may debate whether the rise of generative AI turned out to be uniquely disturbing or just another continuation of a long digital revolution. We don't know which way it's going to go at the moment.

I've talked to some people who imagine an internet that's dominated by artificial intelligence, maybe lots of artificial intelligence agents that are responding to various incentives and dispatched to lobby on my behalf and find information that suits my interests, and other folks who think maybe AI will play less of a role in defining how we consume and relate to information and each other than others believe. I don't know. Which way do you think it's going to go, having now done this 130-page report? What does your imagination tell you?

Jon Bateman:

I'll give my own view, which, as you said, is necessarily speculative, but I think maybe the contrarian in me, if I can use that term, is drawn to bring us back toward a sensible counter-reaction to some of the hype. We were just talking about micro-targeting. One of the worries that people have about generative AI and AI in general is that it will create this super-targeted, super-persuasive information. It's not clear that there are unlimited possibilities for micro-targeting such that, if we could only feed more data into an algorithm, that algorithm could identify persuasive techniques far beyond what humans and pre-existing AIs and big data and other data-driven tools have been able to come up with. That's just not clear to me.

The other argument that people make is that generative AI offers cheap and easy ways to create tremendously realistic content, but as we've discussed throughout this podcast, realism, or just the verisimilitude of information, isn't always either a necessary or a sufficient condition to get someone to believe something. We've had incredible numbers of people believe, for example, these stolen election claims. They're not being shown any deep fakes; they're just being shown either visual or written information that is mischaracterized, mislabeled. That's enough for them because of their belief in those claims.

So I can't say it won't be a problem. I do worry about the liar's dividend and other kinds of second- and third-order effects, but also, society will react and respond. We may become used to the notion that the internet is filled with all of these bots. For example, one of the ways that we might respond is that the future internet might have less anonymity or pseudonymity than exists today. That could be a response to the disinformation problem: we want to know when we're talking to a real human. But that could create other problems involving your freedom to act anonymously online. So it's very difficult to tell.

Dean Jackson:

I align myself very much with Jon's contrarian impulse around this, which is not to say I don't think it will be a big deal, but I think the ways in which we are thinking about it and the questions we're asking are in many ways the wrong ones. For a lot of these areas, for a lot of the problems addressed by the interventions we evaluate here, I think of generative AI as a booster for something that's already happening. It democratizes the technology to more actors, so there are more people capable of manipulating video, for example. Video manipulation has been around a long time. It's not new per se; more people can do it more easily. Hyper-targeting, as Jon said, is pretty old hat at this point, but you can use that data analysis to turn around more versions of an ad more quickly.

I think you run up against a limit sometimes on how effective that can be. There's this talking point that, I think it was Hyundai, uploaded 60,000 versions of one ad, using generative AI, of course, to tailor it for different audience segments. What do 60,000 data points about a person really tell you? Can you think of 60,000 data points that are static and determinative of, say, who's going to receive a car ad? What someone values in a vehicle is going to change, probably day to day, week to week. So at some point, there are really diminishing returns from that type of hyper-targeting.

So for some of these questions, I think these are older problems that generative AI is adding to in a way. And I think there are new problems we're missing. Some of the things I think are really interesting are, like, what's going to happen to search? Google just laid off a bunch of people who work on search because they're going all in on generative AI for answering user queries. What does the internet look like when we change the basic way we try to get information from it? What happens to all of these news outlets that rely on advertising revenue from Google links, from search referrals, for their revenue? A lot of midsize outlets could die. That's a further effect on the information environment.

There's a way in which, because it's an election year, a year of global elections in which billions of people will vote, we're really focused on election-related harms. We're focused on deep fake video. We're focused on cyberattacks against election systems. That's appropriate, but it's such a slice of the problem. I think in many ways that slice of the problem is less revolutionary and more evolutionary. The revolutions will come in ways that we are not, I think, broadly focused on right now.

Justin Hendrix:

I'd just add that one of the ones I think about is the extent to which generative AI will be employed for content moderation, which could have even more of an effect on the information ecosystem than perhaps the application of generative AI to produce disinformation or even counter disinformation. So we'll see what it looks like when tech firms perhaps scale up their content moderation activities using those types of technologies.

I want to just give you an opportunity to offer some final reflections. One of the things that I was thinking about this morning coming into this conversation was something I read in the Washington Post last week, an article by Yvonne Wingett Sanchez, who was reporting from Arizona about poll workers and the preparations they're making for worst-case scenarios, preparing, she says, almost as if for combat: coordinating active shooter drills for election workers, sending kits to county election offices that include tourniquets to stem bleeding, devices to barricade doors, and hammers to break glass windows. For me, that just really brought forward, again, this reality of what's going on: that there is a point at which false claims, when they reach a certain volume, can bring harm to people.

Jon Bateman:

Justin, that's a very powerful vignette, and speaking personally, I work on this issue in part because of my passion as a citizen for defending democracy. But I'm not optimistic about the state of our democracy, and so separating my emotional feelings from the effort to try to be helpful in this context is a challenge sometimes. There may be countries outside of the United States that have a healthier way of addressing disinformation and polarization. Maybe some of these tools can be picked up by someone and used somewhere, but it matters. This is real. This is the country that our children will be living in, and our grandchildren. We need to do something about it while at the same time recognizing that it is a very complicated and deeply rooted set of psychological, social, and economic challenges.

What would cause someone to attack a polling place? My God. If you've reached that point, you have a sickness in society that needs to be addressed. It's very multidimensional, and it's probably always been there to some degree. So just to say, it's more than just technocratic solutionism. It's a psychological, even, some might say, a spiritual problem that certain societies have, that's going to be with us for a long time, and it's frightening.

Dean Jackson:

This issue also feels, in a way, personal for me, just to offer some biographical information. I grew up in a small town in southern Ohio in a fairly evangelical community. My grandparents were members of a very small church on the banks of the Stillwater River. So I grew up in a pretty conservative area. I went to college in Dayton, Ohio, nearby. I remember in 2016, during the Republican primary, Politico actually ran an article about the Montgomery County Republican Party and the Republican primary there, and all of the first-time Republican voters who were showing up to support Donald Trump.

I saw the way in which conservatism in America changed over my lifetime and the role that political rhetoric and the tone and topics covered by conservative media played in that. I saw the chain emails that my grandparents would forward me. I heard the way people talked about politics. I saw the things they were sharing online, even in the early days of social media, and it gave me, I think, a real appreciation for the importance of words, the importance of the things our leaders say, that journalists say, the things that people read and repeat to one another, in setting the direction of our society. And I think the direction of that society has unquestionably become angrier, less trusting, more divided, more afraid, less empathetic.

I think that last one is especially key because so many of these problems, when I talk to people, don't really come down to the nuts and bolts. They don't come down to numbers and figures. The policy details don't matter that much. When Americans perceive each other as enemies, when they think that the victory of the opposing political party is going to lead to the end of their way of life, when you have this high degree of affective polarization, and you've written on this, Justin, I really commend your report on this out of NYU, we cite it in the report, that's a tremendous problem. And I don't underestimate the role of all these feedback loops in the information environment in driving that.

I guess the thing I'll say to end is, you referenced my work for the January 6th committee. I think often back to a transcribed interview we did with Brian Fishman, who was head of dangerous organizations for Facebook. That document is public; go find it. It's professionally transcribed, word for word, our conversation. He was a skeptic, I think, of the amount of blame you could place on any one platform for the insurrection, but he talked about the problem of increasingly heated political rhetoric coming from our political leaders, and that's not something that's new with President Trump.

I remember I was in graduate school when Gabby Giffords was shot in Arizona. I remember politicians saying, "We're going to reach for the bullet box if we can't solve our problems with the ballot box." We've been on this trajectory of rising temperatures for years, and I think that the incentives created in the media environment for politicians to speak to voters a certain way create an appetite among voters for that rhetoric from politicians, which means that politicians then have to ratchet up their rhetoric to be heard. I think it's a tremendously complicated and important problem, and in some ways, I do think it is a cornerstone problem of our democracy today.

Justin Hendrix:

Well, another live experiment is ongoing in 2024 in this country and many others. We'll see what more evidence is produced and added to all of the empirical data on these questions, which perhaps the two of you will analyze in future and come back and tell me about your findings. Jon and Dean, thank you so much for the work on this report and for speaking to me today.

Dean Jackson:

Thank you so much for having us, Justin.

Jon Bateman:

Thanks, Justin.
