Reconciling Social Media & Democracy: Joan Donovan and Robert Faris

Justin Hendrix / Oct 14, 2021

On October 7th, Tech Policy Press hosted a mini-conference, Reconciling Social Media and Democracy.

While various solutions to problems at the intersection of social media and democracy are under consideration, from regulation to antitrust action, some experts are enthusiastic about the opportunity to create a new social media ecosystem that relies less on centrally managed platforms like Facebook and more on decentralized, interoperable services and components. The first discussion at the event took on the notion of ‘middleware’ for content moderation. The second discussion looked at this question through the lens of what it might mean for the fight against misinformation. It featured:

  • Joan Donovan, Research Director of the Shorenstein Center on Media, Politics and Public Policy at Harvard's Kennedy School, and
  • Robert Faris, Senior Researcher at the Shorenstein Center on Media, Politics and Public Policy and an affiliate of the Berkman Klein Center.

Below is a rough transcript of the discussion.

Justin Hendrix:

We will move on now to our next conversation, which is going to touch more specifically on the problem of disinformation. So the two of you brought forward a different point of view on this general proposal of decentralization, middleware and unbundling in your piece, Quarantining Misinformation.

For those of you who don't know these folks, of course, Joan Donovan is Research Director of the Shorenstein Center on Media, Politics and Public Policy at Harvard's Kennedy School, and Robert Faris is a senior researcher at the Shorenstein Center and an affiliate of the Berkman Klein Center. So very, very grateful to have you both here. I think of both of you as people who have been not just thinking about this problem from a purely theoretical or academic perspective, but who have also been working with people in the community and in civic organizations that have been deeply affected by these issues. And it's worth us thinking about that-- maybe bringing this conversation a little bit towards the fact that while we're having a theoretical conversation, there's blood and bones at stake here with what happens on the ground in our communities.

Rob Faris:

I'm delighted to be here and in this rich conversation with so many smart people. I'm really heartened that so many great minds are looking at this problem. So I want to take a pretty broad look at this and say that the middleware idea is couched within the age-old question of what platforms ought to be. Should they be completely neutral platforms that carry all speech, or should they be closer to media entities that are actually responsible for what resides on their platforms? Right now, we're in a middle zone where they're kind of, sort of responsible for the content on their platforms. And I think that a lot of the ire directed at platforms right now is pointed in different directions. From one side of the political spectrum, people are saying, "Platforms are blocking way too much content, and they should be compelled to be neutral platforms." The other side of the political conversation is saying, "No, in fact platforms are not doing a good enough job of policing their platforms and removing harmful content."

A lot of today's conversation started from one of the premises that Professor Fukuyama begins with: that, long term, having platforms that are not neutral is not tenable. I think a lot of the conclusions, the logic that follows if you start from that premise, make a lot of sense. I'm not sure I'm ready to start from that premise, and I think we ought to be asking that very question: would removing the current levels of content moderation and turning it over to middleware improve the world that we live in? Another way to ask the question is, if we were to turn the clock back to the fall of 2020, would we be happier with middleware providers that permitted the free dissemination of content that said the election was stolen and we have to resist this tyranny? For me, that would not be an improvement in the world, and so I'm questioning the basic premise that platforms must be neutral as a normative element.

Another related point I'd like to make is that I think a lot of this conversation is taking issue with the current normative situation that we have, which is a private ordering-- it's media platforms doing, as is their right, the work of moderating content. And the question before us is: given that there's a lot of lawful but awful content that a lot of people think ought to be moderated, and a lot of lawful but awful content that people don't believe should be moderated, and that this is being left to private entities, is there a public intervention that we think could improve the state of the world? That, I think, is the debate we're having today. And there's an assumption that right now, things are so dire that we have to do something. I kind of agree with that, but whatever it is we do, we need to be pretty well convinced that it's actually going to improve the world around us.

There's much we agree on here. I think we agree that the platforms are not at all well suited for bearing the degree of responsibility which is now currently in their laps. That point, I think, was very eloquently made by Siva Vaidhyanathan long ago, and I tend to agree with it. I also agree with Professor Fukuyama that we need to be thinking about institutional responses to this, which is: how do all the various pieces fit together so we're pushing things in the right direction? Wholeheartedly agree with that. I'm not sure what the best institutional response is, and I think we ought to be leaning into that question. I think it would be a mistake to assume that there are not institutions that are trying to guide the social norms and these private decisions that are being made. There's a lot of social pressure, there's market pressure being brought to bear on these companies, and they're responding to that. Does that work great? Nah, it doesn't. The question is how we can improve upon that. And is there a government-level intervention that would improve upon it? I'm not so convinced. I haven't heard it yet.

The middleware idea sounds pretty good in some respects. The idea of giving users more control over what they see and how their feeds are curated makes good sense to me. Another feature I like is the notion that there would be better attribution for the content people are seeing, rather than the amorphous algorithm feeding people things-- "I saw it on Facebook"-- without understanding who it was that thought this was good content that people ought to be looking at. I think that might be helpful.

To implement middleware to the detriment of current content moderation by companies? I am not there yet at all. So I would have to be convinced on that as well.

I think the last question which I want to turn over to Joan is, are we ready to admit that large scale algorithmic curation of content has failed, that the marketplace of ideas has failed? And if so, what the hell do we do about that? And that's where I am. Joan, please clean up the mess that I've left you. Thank you so much.

Joan Donovan:

Thanks Rob-- such a rosy picture. I really want to thank everybody, particularly the opening speakers, for getting this kicked off, and I'm a big fan of Mary Gray's work. Her book Ghost Work has really made me think a lot about where humans should be in the conversation here. I don't think humans should just be in the conversation as content moderators; I think we should think more holistically about, as you were saying, Rob, the point of a platform and the point of all of this talk, and where we have traditionally gone in society to have those who organize knowledge be at the center of the conversation. And so I've often wanted to bring more librarians into the conversation here. Twitter's running a little bit of an experiment related to their trends and curators, and it's hired some information specialists, librarian types, to start writing the descriptions and take a second look there.

We are not, of course, going to be able to moderate all the things-- that is not the point. It shouldn't be the point. As Daphne was saying, there's someone's cousin on Facebook and they're wrong some of the time? Sure. They should be allowed to be wrong some of the time. For me, the question is, "Is the content life and death?" If it is, we need some better curation, and I think we should look to librarians for help with that. The middleware solution of adding technology on top of technology, creating another industry that's codependent upon these platforms and upon this data regime, is going to bring more problems than it's worth. I'm obviously the most unpopular academic in this field, because I have argued against allowing Facebook to collect so much data on people so long as they share it with researchers.

I think that we do need some regulation that deals with data privacy in such a way that if you are running a product, you shouldn't be able to collect more data than is needed to run the actual product, and there should be some expiration of that data built in. And so as we're thinking about this conversation, we should talk about where more people need to be put into the system and where we need better and more accurate knowledge-based curation systems, especially around things like a pandemic or information that is life and death. And then finally, I think that we have to take seriously the fact that each platform and company took a different road towards how they would either support or moderate political speech. If we had-- and this is the paradox of data, of course-- access to the right kinds of data, we could potentially know a lot more about how many people were exposed to which kinds of misinformation, how many people that misinformation incited, and to what degree we should then hold political figures accountable for the kind of behavior online that leads, in some cases, to criminal activity.

Now that's a lot to contend with, given that people who talk about this stuff sometimes don't look at it in its depth and details. And what we're dealing with here is, of course, a problem bigger than Facebook, bigger than Twitter, even bigger than Google. It extends across all of these minor apps. It extends across the ones that we already know are bad places that spew a lot of disinformation, like Parler or Gab or 4chan, or any of the versions of those message boards that help white supremacists get organized. The main-stage platforms that we're talking about generally are just the delivery systems or the distribution systems, or spaces where they go to harass people. But we do have a problem with other platforms like Twitch and Discord, et cetera. So that isn't a lot in the way of solutions. But I do think that there are ways in which we could start to think about our information environment and the actors within it differently.

I also think that I'm in support of looking at technological design as a product, and not just simply as magic and innovation, and seeing if there is a way to create some kind of regulation or policy options that enforce essentially what the platforms have already promised: that they're going to have terms of service and they're going to apply them evenly to different people. I think the whitelist that the whistleblower has given a wink and a nod to is a really important piece of the puzzle here, when we talk about who gets to behave how, and how people in power tend to be able to, one, reach the most people on these platforms and, two, exploit it to their own political ends. I'll cut myself off there, Justin, because I think we're at our 15 minutes.

Justin Hendrix:

Yeah, absolutely. And I appreciate that, and I will also just maybe start a little bit of a back and forth between us here. Maybe we'll try to acknowledge any questions in the chat as well, but Rob, you said, 'Are we ready to acknowledge that the marketplace of ideas is broken?' I just want to maybe push you on that a little bit. In your piece, you write: "In our view, the evidence clearly shows the marketplace of ideas in the US is broken and that more speech, something which the internet has been wildly successful at producing, has not been a remedy for bad speech." So I don't know if I've put you in a box there, but do you think that that's right? Are we at that point where we can go ahead and draw a line under that?

Rob Faris:

I'm there-- I think that certainly the marketplace of ideas has not produced the outcome that we had hoped it would. The idea that open discourse and discussion on the internet would resolve problems, surface better ideas, and sink bad ideas has, in practice, not happened.

Justin Hendrix:

Joan, you mentioned this idea of librarians in your piece. Maybe you can address Greta's comment here, that obviously librarians are overworked and underpaid. You actually imagine that maybe there's some carve-out for thousands more to be hired and put into these roles. Can you imagine that as part of a public interest reform?

Joan Donovan:

Professor Fukuyama lays it out-- you've got to have short, middle, and long games here. If the long game is to ensure that no company is able to dominate the information space in such a way that it can cut off any kind of communication across several major platforms, including WhatsApp and Instagram, then that's a long game, right? That doesn't mean it shouldn't be tried, but it should be happening in tandem. I've advocated for thinking through the taxonomies of content and users from an information science perspective for a while, but I'm not the only one doing this. Sarah Roberts, who wrote Behind the Screen about content moderation, coined the term commercial content moderation and has been studying it for a decade now. She thinks that there's a different role here for people to play, as does Safiya Noble. A portion of Algorithms of Oppression is devoted to thinking about, "Well, what would search look like if librarians were in charge of content taxonomies?"

Because the big challenge in 2016, 2017, when everybody was yelling about fake news, was how easy it was to game these systems. What's incredibly important to understand is that things like Facebook and YouTube and Twitter are incredibly predictable. That's why you see that activists as far back as 2011 were able to game algorithmic trending during the Egyptian revolt and Occupy Wall Street. You just needed to have about a thousand people doing the same thing at the same time. They built middleware-- they built CoTweet, and they used Hootsuite-- in order to do these kinds of media manipulation, low-tech hacking. Eventually brands started doing it, then politicians, and so we're 10 years into this moment where these products are built in such a way that they advantage media manipulators and disinformers, because they're so easy to manipulate. You can either push a crowd to do it or fake a crowd-- there's excellent research out of... Shoot. I think it was in... Was it NYU that did the Follower Factory?

Justin Hendrix:

Oh, Follower Factory. You're thinking about Columbia, Mark Hansen, I think.

Joan Donovan:

Mark Hansen. Yeah. So sorry, but yeah, the follower factory model, right, where you have this entire industry of fake accounts and fake engagements. So we're not reckoning with a system that is so pure in its design and in its motives that it hasn't already given birth to a dark middleware world: search engine optimization companies, social media experts. So what we're talking about, if we go the middleware route, is more institutionalization of some of those processes and products.

Justin Hendrix:

I want to maybe push you and Rob a little bit on this, because I'm thinking a little bit about the Disinfo Defense League that you've been a part of and played a role in, which I think is now over 200 organizations-- a loose consortium that thinks about disinformation problems together and tries to apply theory into practice. Lots of different groups, especially groups that are concerned about the effect of disinformation on Black people, people of color, LGBTQ people, and other minorities. People are organizing themselves to confront these problems. There are government entities that we're aware of as well, of course. In this last round of violence in the Israeli-Palestinian conflict, there was a real asymmetry between what the Israeli cyber defense unit was able to do in terms of flagging posts that it didn't quite like to Facebook, versus the degree of organization on the Palestinian side to do anything similar, which may have contributed to an asymmetry in takedowns in that particular environment. So there's a civil-government middleware that's already working out there, and yet it's not really platformed in a way. It's all backdoor conversations, deals, soft power, who knows somebody at the platform who has the right email address to complain to. I don't know. Can you imagine any of that being mediated in a way, or given tools, in this future?

Joan Donovan:

Yeah. I mean, the Disinfo Defense League was born out of the coalitions of folks that brought net neutrality to the table, that brought platform accountability measures to the table. And anybody who participates in social movements knows that social media has changed everything about organizing. It's made things like petitions so automated that they're a completely useless tactic now. It's made donations so easy to administer, but it's added this entire proliferation of fake organizations and grift-- it's incredible, if you start to look, how much fakery is out there. So social movements are not unaware of media manipulation and disinformation tactics. But there's a need for an organization that would bring people together so that they can learn these tactics and understand what's happening when they're trying to launch a campaign but see other people taking over the hashtags or the search terms, or attacking the activists... So it's just the recognition, I think, of this new terrain, or this new kind of information war.

And social scientist Sasha Costanza-Chock wrote about this, and the book Design Justice is a really good example of it: social movements had a first-mover advantage because they got online early. Even the left, with Indymedia, had a good advantage in what it meant to do online media and cover protests. But as time has worn on, other kinds of provocateurs, other kinds of political agents, operatives, and foreign entities have all figured out that these tactics work. And so I wasn't surprised during the whistleblower testimony to hear her mention the Chinese government and espionage using Facebook to track the Uyghur population. That's a big deal. That's a huge allegation to make against the platform, that another government is able to use it for surveillance. Does that mean that the Chinese government is wrong, or does that mean the platform is built so poorly to protect privacy that the technology itself is a liability to its users? Could be both, right? And then what it points us to, obviously, is also an indictment, I think, of things we know but cannot say, which is the way in which U.S. entities-- police and the FBI-- use social media to track activists and other folks across platforms.

Justin Hendrix:

Rob, I want to come back to you just for a second-- you're one of the co-authors of the influential book Network Propaganda. How do these ideas connect to the more traditional media ecosystem? Can you imagine this world of proposed middleware or decentralization as somehow working in conversation with the traditional media ecosystem? Would it be a good or a bad thing to move in this direction, based on the current media landscape we've got?

Rob Faris:

That's a great question. I think in many ways it mirrors the larger media ecosystem, and it would just exacerbate the echo chambers, if anything. It would certainly acknowledge them, and I think acknowledging them could be helpful, but were we to have middleware providers, the only question is whether it's going to be Fox News, Breitbart, or Infowars that has the greater sway over the content that's seen on social media platforms amongst conservative audiences. It might not matter all that much, but I think what it would do is reinforce the existing divisions within the media world that we see now.

Justin Hendrix:

Joan, any perspective on that from your seat?

Joan Donovan:

Yeah, I mean, we've dealt with media consolidation in the past. We definitely have a different order of it now, in the sense that there is this divorce-- the people that are distributing the content are not the exact people making the content-- which is a little bit different. That's what Section 230 was supposed to encourage: some of this proliferation of these distribution systems. But now we're in a situation where we do have several dominating news outlets, as well as Facebook in particular, which is trying to hide just how powerful and how dominant these entities really are.

And I've read a lot about Andrew Breitbart and what his vision was for Breitbart and his work. One of his axioms inspires a lot of the people that were part of Stop the Steal, and I do really believe that 'politics is downstream of culture'-- but I think that culture is actually downstream of infrastructure. And so, as a result, I think that if we can deal with this as an infrastructure problem, we're dealing with something really different. I do not think that we should empower these companies to grow to such a massive scale on the rationale that they would then do a better job of content moderation, because they're not, and they're huge. And so I think it's actually better to have many, many, many, many, many, many competitors, where the damage that could be done by any one disinformer or media manipulator would just be much more expensive and resource-heavy for them to attempt across so many different platforms where people are getting their news and information and connecting with their friends.

And this is because I see the effects of deplatforming someone like Nick Fuentes or Ethan Ralph or Alex Jones: once they have to go to these second-, third-, or fourth-tier streaming platforms, they are just not that important. The tools that are built into YouTube around monetizing, as well as the tools built in for reminding people that so-and-so is going live at eight o'clock, actually build a much, much larger audience. Those audiences don't travel with them. They get cut in half almost instantaneously. And by the time they're at some cut-rate app that barely works, they have very little influence. You can think of Milo Yiannopoulos, for instance: once these figures get off of the big platforms, they're just not as dominant as they were when they had access to them. So I think 'bigger is badder' in this instance.

Justin Hendrix:

Well, I want to thank you both so much. I'm respectful of your time-- we've run up on the hour. I'm very grateful to both of you for joining today. You've both complicated the picture, but that was the goal, and you've also made it richer with this discussion of mis- and disinformation in particular. So thank you so much. Where can folks find you, Joan, and you, Rob?

Joan Donovan:

You can go to mediamanipulation.org; that's where most of our writing is. Thank you, Justin, really appreciate you inviting us.

Justin Hendrix:

Thank you.

Rob Faris:

Thank you both.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
