
Can social media help depolarize divided nations?

Justin Hendrix / Aug 11, 2021

There is an active debate about the conditions in which and the extent to which social media plays a role in contributing to polarization and division in society. Last month, I spoke to Jonathan Stray, a Visiting Scholar at the Center for Human Compatible AI at UC Berkeley, about his new paper, Designing Recommender Systems to Depolarize, which turns that debate on its head- asking whether social media platforms could play a role in reducing division. Jonathan’s paper “examines algorithmic depolarization interventions with the goal of conflict transformation: not suppressing or eliminating conflict but moving towards more constructive conflict.”

Below is a lightly edited transcript of our conversation. You can also listen below- and remember to subscribe with your favorite podcast service.

Justin Hendrix:

Tell me what you were setting out to do with this paper.

Jonathan Stray:

This is really the culmination of a train of thought that I've been following for three or four years on ‘what is this polarization thing anyway, and what do recommender systems, social media, news aggregators, all that stuff have to do with it? And how should we approach this problem?’

Justin Hendrix:

What could you tell the listener about the relationship between social media and polarization?

Jonathan Stray:

We should start with ‘what is polarization anyway’? And we all kind of know already, right? It's the experience of living in a divided society, but you can get a little more precise in a number of ways. So you could talk about polarization at the elite level- so congressional voting patterns. You could talk about issue position collapse. So why is it that if I know how you feel about climate change, I could probably say how you feel about abortion rights, right? Those things have no sort of logical connection and yet they sort of collapse down to one side.

Or you can talk about it in terms of this sort of feeling of dislike and distrust of the other side. And that's really the thing that a lot of scholars have zeroed in on recently- which is, as someone put it to me recently, very bluntly: ‘let's face it, we hate each other’. So that's not healthy, and how do we move forward from there? I've been trying to ask a lot of people: what is social media's role in that? I've really gone through everything I can find looking at that connection.

Justin Hendrix:

Folks are, in particular, interested in the most extreme kind of aspects of polarization. We just saw a violent insurrection at the U.S. Capitol on January 6th, we're seeing across the country right now these arguments about things like critical race theory, of course, and masking and the vaccine. How does this paper relate to those types of problems?

Jonathan Stray:

The main thing that I'm trying to do is pull in a bunch of thinking and ideas and practice from the peace building tradition. This is a field that started sort of after World War II, after the creation of the UN, when there was a lot of optimism about peace. And so peace became a profession. It's gone through several iterations, and there's now a discipline, a way of thinking, a group of people whose job it is to try to help divided societies move forward.

So it's interesting- even just what you just said, right? The insurrection on January 6th- a fair fraction of the country would not call it an insurrection. So even right there, we kind of have a problem, right? It's really important to a lot of people to call it an insurrection, and it's really important to a lot of people not to call it an insurrection. So how do we move forward? And what does not just justice, but peace look like in that case? And so I've been really trying to go back to basics and answer these questions of ‘what do you do when there's violent conflict’?

Justin Hendrix:

So a listener might reasonably ask: ‘but there was an insurrection- so what do we do when there's disagreement on what the fact pattern even is?’ What is the fact base in a particular circumstance? That seems to be so often the case these days.

Jonathan Stray:

Now we're sort of getting right to the heart of it. Before going into the details, let me back up slightly and say that I'm very influenced by a tradition called conflict transformation, as distinguished from conflict resolution. So conflict resolution is ‘we have to end this conflict’. Conflict transformation is ‘we have to change it in some way’. And one of the key differences is that conflict transformation sees conflict as normal, it's part of how societies change. I mean, you can think of democracy itself as a conflict transformation intervention- we're going to vote instead of hitting each other with sticks.

So then the question is, what's better conflict versus worse conflict? And a lot has been said about this as well. Sometimes you say constructive or destructive conflict, or healthy or unhealthy conflict. And one of the things you can say is that violence, physical violence is destructive conflict. So we're trying to find a way to move the conflict to a space where we're not fighting about it violently, but moving through it in some other way.

And I think interpersonally everybody's had this experience- the difference between a good argument and a bad argument. A bad argument, you felt like you came out worse- everybody got hurt, you went to the ugly place. Some words got said that you can't take back. A good argument, it's not about agreeing with someone, it's about understanding where you disagree and having both respect for yourself and respect for the other person.

So where is the actual disagreement? It can be quite hard to get into that kind of space because it's a very vulnerable space. There's this idea that if we don't push back hard against the other side, they're going to win- but I think that's an illusion as well. There's no winning this conflict. The idea that one side can just remove the other side from American politics when they're so evenly matched is ludicrous. So we need something else, and that's really what I'm trying to get at with some of the ideas in this work.

Justin Hendrix:

So I suppose it's maybe the right time to bring in the idea of social media and algorithms and the role that you think they could play in this. And maybe I'll back up once more: you look of course at the role of social media in potentially exacerbating problems of polarization and division, and you distinguish that from radicalization and extremism at some level. Is social media partly to blame for the problems we're facing today?

Jonathan Stray:

So this is the question everybody wants to answer, right? It's definitely involved. So in the beginning of the paper I go through a whistle-stop tour of the available evidence, because I think it's really important that we talk about the empirical evidence for this idea of the relationship between social media and polarization, radicalization, extremism, all this stuff.

For instance, the filter bubble idea has been very popular, very widely discussed, and it's very plausible. But it also seems to be false, right? There's a bunch of studies showing that algorithmically-selected news doesn't seem to be any less ideologically diverse than human-selected news. People who get most of their news from social media are not more polarized than people who don't. Among people who use the internet more, polarization is actually increasing at a lower rate- polarization in the U.S. is increasing fastest among older people, who use the internet less. So that starts to lead to questions like, well, what about the rest of the media system, right? What about cable news?

And you can look across countries as well. Plenty of countries with advanced economies and lots of social media use don't have increasing polarization. So there's something else going on, and the best evidence we have for what could be happening comes from two studies where they actually asked people to stop using Facebook. One was in the U.S., I think immediately before the 2018 midterms, where the researchers asked people to stop using it for a month. And there they saw that an index of polarization measures decreased a little bit. So this was issue-position polarization- how far apart people were ideologically when you ask them ‘how do you feel about gay rights? How do you feel about abortion? How do you feel about gun control?’

So that study has been used to argue that social media is causing polarization. In the American context, that may be true. But there was just another study that came out, this one in Bosnia and Herzegovina, where they again asked people to stay off Facebook, this time during Genocide Remembrance Week. And what they found there was that the people who were off Facebook were more polarized. Digging a little deeper, what they found is that that's because people's offline social networks were very uniform.

So in this case, social media was actually providing diversity that they didn't get in their day-to-day lives. So there's some relation, because we can see these causal effects, but it's not unidirectional and it's not straightforward. It's not like, well, if we just shut off Facebook- or shut off all of them really, Facebook, Twitter, the whole gamut- that we would solve our political problems. I think that's actually a kind of very wishful thinking, right? That's the opposite side of technology hype. If you believe the hype that technology is all-powerful and can shape societies and has all of these solutions within it, then you might also believe that the solution to society's problems is technical. And unfortunately it's not. This is where I try to go in the paper- even if it's not the primary cause or the main driver, it may still be a useful place to intervene in these dynamics because it's a powerful media system.

Justin Hendrix:

So before we move to that part, do you sort of think on some levels, social media is off the hook on the causation question or do you still regard there to be questions to be answered there?

Jonathan Stray:

I think the exact nature of the relationship is very interesting. But for me, it's maybe only an intellectually interesting question. I mean, what do you get if you find out the answer to that, right? I mean, do you want to sue them? Okay, you could in principle sue social media companies out of existence or regulate them out of existence, but I don't think you can erase the form, right? If all of these companies went bankrupt, we would have new ones tomorrow. You can't uninvent global social media.

Justin Hendrix:

Let's talk about the types of ideas that you address in terms of how social media might be used to depolarize the conversation in the United States. You focus in particular on algorithmic systems, recommender systems and the types of technologies that are employed at scale on social media.

Jonathan Stray:

So to start with, I try to take at face value this idea of exposure diversity, right? I just sort of reeled off a bunch of experiments that suggest that it's not an exposure-to-diversity problem. But exposure to the other side does produce better understanding, warmer relations. I mean, in some sense, this is completely intuitive. There's also a really good meta-analysis- I think it covers something like 500 studies- that came out a few years ago about intergroup contact. And, yes, intergroup contact reduces prejudice and increases understanding.

There's something fundamental about contact with the other side that is good- but is it good on social media, and under what conditions? In fact, there's a line of research suggesting that merely increasing exposure to the other side can make things worse. This is the sort of ‘bad argument’ scenario: you're talking about something, and someone always gets you into an argument rather than resolving anything. This was best summed up at a peace building conference I went to, where someone talking about online chat spaces for peace building said ‘unmediated chat polarizes’. It has to be the right kind of contact. So ‘what is that right kind of contact?’ is the first avenue I go down in this work.

Justin Hendrix:

So how can algorithmic systems give us the right type of contact? What do you think companies like Facebook or Twitter should do? What should they build that would do that?

Jonathan Stray:

Well, there's one study that looked at just asking people to subscribe to a counter-ideological news source. So if you're a liberal, you add, I don't know, maybe Fox and the American Conservative or something, and vice versa, right? I often just use red and blue for the two sides here, in part because I want to abstract away from the details and talk about how conflict works in general, right? That's part of what I'm trying to say here- we know quite a lot about conflict in general. So if you're blue, you subscribe to a couple of red news sources, and if you're red you subscribe to a couple of blue news sources. And when you do this for a month, what they saw experimentally was a one-point drop in affective polarization.

For reference, affective polarization has been increasing by about half a point a year over the last few decades, right? So it's real, but it's not large. And so then the question is: well, okay, can you do better than that? Well, first of all, those are only experiments with news. Social media is much more than news, right? Much more than professional journalism. And how do you avoid backfire effects? One of the clearest findings from this line of research seems to be that tone matters, civility matters. It really is hard to listen to someone who is hostile. So if you follow that line of thinking, then what we need is not merely ‘let's see more from the other side’, but ‘let's see the best that the other side has to offer’, ‘let's see the clearest, most civil arguments’.

It sort of sounds corny to tell everybody to calm down. But there's pretty good evidence that if you're going to contact the other side, you should contact people who are going to be nice about it. And you see this in mediation and peace building, right? You have to create a space with some rules for how you converse with each other. So bringing it back to algorithmic intervention, we already have toxicity classification models. They're used for bullying, hate speech, stuff like this. It wouldn't be very hard to repurpose them to say, ‘is this a reasonable argument? Is this civil?’ And say, okay, we're going to show you stuff from the other side- but only stuff from the other side that is civil in some way. If you are going to follow the filter bubble hypothesis, that's the tweak on it that I think you would need to actually make it work.
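As a rough illustration of the reranking Stray describes- promoting counter-ideological content only when it clears a civility bar- a minimal Python sketch might look like this. The stance labels, civility scores, threshold, and boost weight here are hypothetical placeholders, not a description of any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    stance: str        # "red" or "blue": a hypothetical stance label
    civility: float    # 0..1 score from a hypothetical civility classifier
    base_score: float  # the platform's usual relevance/engagement score

def rerank_with_civil_counterviews(items, user_stance, civility_threshold=0.8, boost=0.2):
    """Promote items from the other side, but only if a civility model rates them
    as reasonable, non-hostile arguments ('the best the other side has to offer')."""
    def adjusted(item):
        is_counter = item.stance != user_stance
        if is_counter and item.civility >= civility_threshold:
            return item.base_score + boost  # boost civil counter-views
        return item.base_score              # otherwise, leave the score alone
    return sorted(items, key=adjusted, reverse=True)

# Example with made-up candidates for a "blue" user:
feed = [
    Item("a", "blue", 0.90, 0.50),
    Item("b", "red", 0.95, 0.48),  # civil counter-view: gets boosted above "a"
    Item("c", "red", 0.20, 0.55),  # hostile counter-view: no boost
]
for item in rerank_with_civil_counterviews(feed, user_stance="blue"):
    print(item.item_id, item.civility)
```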

Justin Hendrix:

And do you feel that the systems that exist right now could make that possible? Is it something where the kind of, I don't know, computational linguistics or other types of approaches to discerning the semantics of the content are advanced enough to be able to tell whether an argument or a discussion is civil at scale?

Jonathan Stray:

I mean, more or less, yes, right? You're always going to run into problems with sarcasm and memes and deep cultural meanings, right? I'm sure you've had people on the show who have discussed challenges like: how do you build an automated hate speech classifier? Can you even do that? And all of that is a problem- but I would say on average, yes, you can do this. And the technology is there. In particular, there are now techniques that look at not just the content of a single message, but the content of an entire conversation and the context of that conversation as well. Who are these people? Where are they in the social network? What types of other conversations have they been involved in? And this is already being used for misinformation, hate speech, bullying, this type of stuff.

I don't know of any reason in principle why you couldn't use this and say, ‘ah, yeah, this was a good argument’, right? Versus ‘this is a bad argument’. And probably how you would end up doing this is you would have people go in and label these conversations as productive or unproductive. So then you can imagine trying to build a rater's guideline: ‘Look at this comment thread and tell me if they were fighting well or fighting poorly.’ And obviously there's some subjectivity there, but again, I think I really have to appeal to our sort of personal, emotional sense of what's a good fight versus a bad fight- I think we all know, because I think we've all had both experiences.
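A minimal sketch of how those rater labels could bootstrap such a model, assuming a handful of human-labeled comment threads and a standard text classifier. The example threads, labels, and model choice below are purely illustrative- a production system would also use the conversation-level and network-level signals Stray mentions.

```python
# Sketch: learn a 'productive vs. unproductive argument' signal from rater labels.
# Requires scikit-learn; the labeled threads below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example is the text of a comment thread, labeled by raters following a
# hypothetical "good fight vs. bad fight" guideline.
threads = [
    "I see your point about costs, but here's where the data looks different to me...",
    "Only an idiot would believe that. Typical of your side.",
    "Can you say more about why you think the policy failed?",
    "You people are ruining this country.",
]
labels = ["productive", "unproductive", "productive", "unproductive"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(threads, labels)

# Score a new thread; a ranking system could then upweight threads
# predicted to be productive arguments.
print(model.predict(["That's a fair criticism, though I'd weigh the evidence differently."]))
```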

Justin Hendrix:

So you point to three specific areas where algorithmically we could intervene on social platforms, can you describe those?

Jonathan Stray:

First, there is moderation- that's what content is on the platform. Then there's ranking, which is a vaguely technical term, right? It's sort of the score that you give to each item, and then you have all of us see the top ten items- but you can think of it as who sees what; it's also personalization. And then there's user interface, or presentation: what are the buttons that we have? What are the verbs that we can take? Your listeners may be familiar with a study that replaced the ‘like’ button with a ‘respect’ button and got more people clicking ‘respect’ on arguments that they disagreed with but thought were solid points. So those are more or less the three places that you can intervene.

The challenge in going down this line is that these are huge products, and there are many different types of products, right? And so you have to scope it somewhere. You could also start talking about, ‘well, maybe Netflix shouldn't carry violent movies’ and so forth, but for what I'm trying to talk about, that's a bit out of scope- so I'm talking about the algorithmic changes one could pursue.
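To make the three intervention points concrete, here is a toy pipeline sketch. Every function and field in it is an illustrative stand-in- a real platform's moderation, ranking, and presentation layers are vastly more complex- but it shows where each lever sits.

```python
# Toy sketch of the three intervention points: moderation, ranking, presentation.
# All functions and fields are illustrative stand-ins, not any platform's real API.

def moderate(items):
    """Moderation: decide what content is allowed on the platform at all."""
    return [it for it in items if not it.get("violates_policy", False)]

def rank(items, user, k=10):
    """Ranking/personalization: score each item for this user and keep the top k."""
    def score(it):
        return it["base_score"] + user["affinity"].get(it["topic"], 0.0)
    return sorted(items, key=score, reverse=True)[:k]

def present(items):
    """Presentation: choose the interface verbs, e.g. a 'respect' button alongside 'like'."""
    return [{"text": it["text"], "actions": ["like", "respect", "reply"]} for it in items]

# A depolarization intervention could hook into any of the three stages.
user = {"affinity": {"politics": 0.1}}
items = [
    {"text": "A civil post about a policy debate", "topic": "politics", "base_score": 0.5},
    {"text": "A post that violates platform rules", "topic": "politics",
     "base_score": 0.9, "violates_policy": True},
]
feed = present(rank(moderate(items), user))
print(feed)
```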

Justin Hendrix:

You also point to research that suggests that these types of algorithmic changes over time could actually influence the behavior of users, what evidence is there for that?

Jonathan Stray:

Well, I mean, we know they do, right? The whole idea that you could intervene in social media to change politics is premised on the idea that social media influences behavior. But more concretely, there's some fun work that's been done along this line. If you take a productive comment- a good comment, a civil comment- and you put it at the top of a comment thread after a news article, then the rest of the discussion is better, as rated by human readers but also by the number of interventions that the moderators had to make. Or- this was an experiment on one of the science forums on Reddit- they put just a three-line notice at the top saying ‘here are the things that are appropriate for this subreddit, and we've got 1,200 volunteer moderators to remove things’. Just adding that notice nearly halved the number of comments that moderators had to delete. So we know that very simple interventions change the tone of subsequent discussion.

Justin Hendrix:

So if I'm, I don't know, maybe a kind of rabid free-speecher and critic of the social media companies and their intervention in our discourse more generally, I might say, ‘Jonathan wants an algorithmic nanny state! He wants to interfere in our God-given right to tell each other to $%&# off! And why in the world would we want these platforms to socially engineer our conversations?’

Jonathan Stray:

It's funny, I'm normally on the other side of this argument. I mean, what we have in all of these problems is a tension between different values. I'm currently involved in other work that's trying to list the values that a recommender system should have. And so you have things like liberty and agency and all of this sort of individual-centered stuff, right? I should be able to read whatever I want, post whatever I want. And those are important, I agree with that. And the UN charter of human rights guarantees a right to self-expression. Interestingly, it says ‘regardless of political opinion’- it's an international human right that you should be able to talk no matter your opinion on politics.

But we also have other values. We've got more societal level values, for example- we want our democracy to work. And that means a bunch of things- that means accurate information and it means constructive discourse. So my response to people who make this kind of nanny state or censorship argument is not that they're wrong, but that it's always a balance between different goals. And I have the same response to people largely in the present moment, sort of on the blue side, who are calling for much tighter controls and much more censorship- which is to say the things they're pointing at are real and they're not wrong, but that's not the only value. So we have to find some way of balancing all of the different things that we care about.

Justin Hendrix:

So there's another argument out there that says that the concern about polarization may be misplaced, particularly at this moment in America. That we've got severe democratic erosion- we just saw, for instance, a white supremacist insurrection at the Capitol. That people, particularly people of color, are being disadvantaged in this set of circumstances, and this isn't really a left/right, or a blue/red scenario. How does your thinking fit with that critique or challenge it?

Jonathan Stray:

I think all of that is basically true. Race is obviously a huge factor in American politics and racial justice is one of the major issues of not just our generation, but the generations preceding and the generations following, right? So it's ridiculous to just sort of sweep that under the rug. But I would say two things. The first is that we need not just justice, but peace, right? A functioning democracy requires both justice and peace. Peace is important for security, it's important for people to be able to express themselves. I mean, people are scared, people all across the political spectrum are scared to say what they think because politics has crept into every corner of society. And there's imminent violence, and we're destroying families, and we're destroying social capital, and we're destroying the ability of our government to function. That's dysfunctional, that's bad.

So what we want is a ‘just peace’. This is a concept that originally came from studies of how wars should end. It's not enough for people to stop shooting each other- you have to address the underlying injustices, or you're going to have a war again. And this is why I like conflict transformation as a practice, because it integrates theories of peace and justice. So that's the first thing. The other thing I would say is that that is an excellent capsule summary of what one side in the argument would say- that's the blue side of the narrative, right? What's the red side? And it's not that these narratives are going to be equivalent, or that the complaints of the red side are going to trump the requirements of racial justice or anything like that- it's just that a bit less than half of the people in this country have a different set of concerns, without passing judgment on whether those concerns are valid or invalid.

We're not going to get very far without being able to talk about what their concerns are in some way. And if you're having trouble articulating what those concerns are or what might be legitimate about them- again, setting aside the question of asymmetry, just ‘are there legitimate grievances on the right?’- then I would say that's a sign that you aren't talking to people on the right and that you're disconnected from them. And that disconnection is going to make the progression towards not only peace, but also justice, much harder.

Justin Hendrix:

As I was reading the paper, one of the things that I kept asking myself is are these recommendations to the current platforms or are these recommendations to the platforms we wish we had? I don't know how you'd answer that. Do you see Facebook and Twitter and TikTok implementing these ideas? And do you believe they will? Or do you imagine that we have to have a new set of platforms that take these things into consideration?

Jonathan Stray:

I think the current platforms might implement something like this. I know that there are polarization research groups at a number of these organizations, so I know they're thinking about it. One of the challenges here is that it's politically complicated for them to do anything. Until we as a society get our heads straight about how we think about these problems, there are counter-incentives for them to work on them. Now, I'm not saying they have no responsibility, or that they should be let off the hook, or that they shouldn't do anything. But one of the best ways to move this forward is to come to some sort of external consensus on what the right things to do are- because, as many people have pointed out, a small number of tech executives and engineers should not be making those choices. So we need to get clear.

I think the solutions that have been offered so far are a little thin. There are things like, ‘you should just ban all discussion that's past a particular point on the right of the political spectrum’. Basically the idea there is ‘just don't let these people talk’. And honestly, I don't think that's going to work- and not only do I think it's not going to work to improve the conflict, I'm not sure it's morally right to do either.

Justin Hendrix:

This week, the book by Cecilia Kang and Sheera Frenkel, An Ugly Truth, was published. I don't know if you've had a chance to look at it yet, but there was actually some mention of choices that Mark Zuckerberg made, for instance, around measures of conflict and discord on the site, and whether to tune those up or down after incendiary moments like the trial of Derek Chauvin, the aftermath of the George Floyd protests, or the 2020 election. And there is this anecdote about him essentially deciding not to turn it down past a certain point where it harmed engagement metrics on the platform. I don't know how you think about that, but when you look at the reality of what goes on with the platforms today and what gets reported about how they deal with these issues, how does that comport with the thinking that went into this piece?

Jonathan Stray:

So, engagement metrics- let's do this one first. First of all, we're about five years past the point where any platform uses just engagement. All of them use dozens, if not hundreds, of other signals, including some that are more socially oriented. I did a previous paper trying to catalog that. There's still the question of to what degree, right? Do you really see platforms trading off meaningful engagement for other things? It has happened. There's a Facebook earnings report where Zuckerberg reports a 5% drop in time on site for their video product as a result of their meaningful social interactions metric. So it does happen, at least to some degree. It's hard to know from the outside whether that's a meaningful amount, or how often these types of trade-offs for social good are being made.

I mean, everybody says this, but it's true, right? We do need more transparency around that. I've got some other work where we talk about what type of transparency would be meaningful for these platforms. One of the things I'm most interested in is what we've been calling ‘the court proceedings’, right? When a company makes changes to its ranking, what was it trying to do? What was the data in front of them? What were the motivations for their decision? And so to lay that out as public information- not just for transparency's sake, but because- and this goes to your point about tuning those polarization or extremism filters up and down in response to various current events, right?

The challenge there is that nothing is free. So for example, suppose you care about information quality. Well, one way to get higher-quality information is just to narrow the filter so that only a small number of very carefully vetted, very established organizations make it through, right? So what if Facebook only showed news from the top 10 journalism organizations? Well, there's a whole bunch of drawbacks to that. One is that most of those are going to be blue-tribe media, and so all of the red-tribe users are going to be upset about not seeing their news sources included, for better or worse, right? Just because someone's upset doesn't mean they're right- but it's real when people get upset. And the other problem is that it also sort of destroys the promise of the internet of allowing small voices to be amplified into prominence, right?

If you eliminate all of the fringe outlets, you get rid of everything, the good and the bad. And the same thing applies to hate speech filters, where you start getting false positives that interfere with speech you might want. So nothing is free. And while we don't have enough information to comment on whether specific choices were right or wrong, I also think it's very easy to sort of armchair this and say, ‘well, you should have turned this knob up at this time’ without knowing either what the knob actually did or what it cost to do that.

I feel like we didn't get to where I think depolarization interventions on platforms are ultimately going. So we talked about trying to uprank comments that are not only diverse, but also civil. But I actually have my doubts that's going to work. And I actually have my doubts that almost anything which has been proposed so far- any specific thing- will work. And the reason I say that is because none of it has ever been tried at scale, at least as far as we know. And social media is big, right? I think it's very hard for many people to get their heads around just how big and how diverse- because remember, you're not just doing this in an American context, you're doing this in Azerbaijan, right? And the dynamics of ‘where does trusted information come from?’, ‘who should you listen to?’, or ‘what does that conflict look like?’ are very different in Azerbaijan than they are in the U.S. So it's going to be hard to find a one-size-fits-all solution.

However, I think the place to start is to measure polarization outcomes. So we're sort of flying blind here. If we care about polarization outcomes, then we should be looking at them, and the managers making product decisions should be looking at them. And I think where this ultimately goes is that we may ultimately be talking about building models that predict the polarizing or depolarizing effect of showing particular items of content, or particular categories of content, to specific people or in specific contexts- and then optimizing for those, right? People are starting to talk about using well-being measures for reinforcement learning. Well, what about polarization measures as well? And that's kind of a scary thought, right? We're going to build these machines that try to learn about us not just as groups but, in some cases, individually, and show us things to move us in a particular direction. It's scary to think about trusting the machines with that.

On the other hand, there's nothing illegitimate, per se, about trying to influence people. I mean, that's what education is, that's what public health is. So it's a question of which types of interventions are legitimate. And also we might be forced to include polarization measures in the algorithms to prevent them from creating conflict as a side effect, or exploiting conflict. And this leads to an idea from the conflict field: the conflict entrepreneur. A conflict entrepreneur- I mean, it sounds bad, right? But it's actually used as a value-free term, meaning someone who exploits conflict for gain. So that could be an activist who supports a cause you really care about, or it could be a politician you don't support who's stoking sort of us-versus-them.

And the challenge is that appealing to conflict, appealing to divisions, works: it increases civic engagement. And we see this consistently with the most engaged people. We talk about ‘we want civic engagement’- well, the easiest way to get civic engagement is to polarize people and push them towards extreme political positions, because then they're very frightened and they care a lot. So if we don't want our algorithms to learn, as humans do, that stoking division is an engagement strategy, then we may actually be forced to include polarization measures in their operation. What you don't specify is left up to chance, so we may have to do this.
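To give a deliberately simplified sense of what ‘including polarization measures in their operation’ could mean, here is a sketch of a ranking objective that trades predicted engagement off against a predicted polarization effect. The predictors, weights, and field names are assumptions for illustration, not a description of any deployed system.

```python
# Sketch: fold a predicted polarization effect into a ranking objective.
# The two predictors stand in for learned models a platform might build,
# ideally validated against survey-based polarization measures.

def predicted_engagement(item, user):
    return item.get("engagement_score", 0.0)    # placeholder for a click/dwell model

def predicted_polarization_effect(item, user):
    return item.get("polarization_score", 0.0)  # >0 means predicted to polarize

def objective(item, user, engagement_weight=1.0, polarization_weight=0.5):
    """Higher is better: reward engagement, penalize predicted polarization.
    The weights are arbitrary here, and choosing them is a policy question."""
    return (engagement_weight * predicted_engagement(item, user)
            - polarization_weight * predicted_polarization_effect(item, user))

items = [
    {"id": "outrage_bait", "engagement_score": 0.9, "polarization_score": 0.8},
    {"id": "civil_counterview", "engagement_score": 0.6, "polarization_score": -0.2},
]
ranked = sorted(items, key=lambda it: objective(it, user={}), reverse=True)
print([it["id"] for it in ranked])  # here the civil item outranks the outrage bait
```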

Justin Hendrix:

I think that comes back around to the thing I was going on about with the idea of social engineering, and the comfort level with that- the extent to which people will accept it. I mean, I guess on some level, though, it's already happening- it's just, as you say, left somewhat to chance, or left to the profit motive, or left to the vagaries of how these systems are concocted and lots of engineering decisions that may not all be connected, or certainly aren't driven by these types of concerns.

Jonathan Stray:

Yeah. I mean, look, there's no neutrality here, right? These systems are doing something. So if we don't specify what it is they should do then we shouldn't be surprised to find that they've done something that we don't like. Of course the catch there is, who's the ‘we’? And this goes back to all of the governance questions.

Justin Hendrix:

Well, many more questions to discuss in the future. I appreciate it very much, Jonathan Stray.

Jonathan Stray:

Thank you, Justin.
