Results of the January 6th Committee's Social Media Investigation

Justin Hendrix / Jan 6, 2023

Audio of this conversation is available via your favorite podcast service.

According to the legislation that established the January 6th Committee, the members were mandated to examine “how technology, including online platforms” such as Facebook, YouTube, Twitter, Parler, Reddit, Discord, TheDonald[.]win and others “may have factored into the motivation, organization, and execution” of the insurrection.

When the Committee issued subpoenas to the major platforms a year ago, Chairman Bennie Thompson (D-MS) said, “Two key questions for the Select Committee are how the spread of misinformation and violent extremism contributed to the violent attack on our democracy, and what steps—if any—social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence.”

In order to learn what came of this particular aspect of the Committee's sprawling, 18-month investigation, in this episode I'm joined by four individuals who helped conduct it, including staffing the depositions of social media executives, message board operators, far-right online influencers, militia members, extremists, and others who gave testimony to the Committee:

  • Meghan Conroy is the U.S. Research Fellow with the Digital Forensic Research Lab (DFRLab) and a co-founder of the Accelerationism Research Consortium (ARC), and was an Investigator with the Select Committee to Investigate the January 6th Attack on the U.S. Capitol.
  • Dean Jackson is Project Manager of the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace, and was formerly an Investigative Analyst with the Select Committee.
  • Alex Newhouse is the Deputy Director at the Center on Terrorism, Extremism, and Counterterrorism and the Director of Technical Research at the Accelerationism Research Consortium (ARC), and served as an Investigative Analyst for the Select Committee.
  • Jacob Glick is Policy Counsel at Georgetown’s Institute for Constitutional Advocacy and Protection, and served as an Investigative Counsel on the Select Committee.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

So, I'm speaking to you all about 48 hours after the committee has effectively dissolved with the beginning of the new Congress, and only a couple of weeks since the final report of the committee was published. I want to start just with some basics, and try to give the listener a sense of your role in the investigation, what the purple team did, and how it fit into this broader scheme of looking at the context in which January 6 occurred. Jacob.

Jacob Glick:

Basically the purple team was envisioned as a way to fuse the efforts of other teams together, to create a broader context for how the insurrection occurred and the context in which it was able to foment. So, we did a lot of work with other teams. For example, the red team was focused on the rioters the day of, and how Proud Boys and Oath Keepers organized to get to DC on January 6, so we co-led a lot of those depositions and asked Proud Boys and Oath Keepers broader questions about why they were motivated to become involved in extremist groups and support President Trump.

We also had similar involvement in some of the gold team depositions and green team depositions, trying to provide them with more questions about the broader context beyond the schemes that define the report and define so much of the important work of the committee.

The one thing that the purple team did take on by itself was the investigation into the role of social media platforms in the run-up to January 6, and that took two forms. I think Meghan and Alex are better situated to talk about some of the data analysis that the purple team performed, to talk about trends and the impact of President Trump's behavior on internet traffic. And Dean and I, as well as Meghan and Alex, were also involved in more traditional investigative work, depositions, document production and the like, about various platforms ranging from Facebook to 8kun, to talk through how those platforms had a role in the attack.

Justin Hendrix:

So, Meghan or Alex, could you speak a little bit to that other piece? What did that look like? The data analysis, the network analysis piece of it.

Meghan Conroy:

I can jump in to start. I was actually originally hired to do the job that Dean ended up doing. So, the purple team was an ever-evolving enigma within the committee, and we had to adapt and conquer as the committee's needs shifted and as the investigation progressed. So, essentially when I started, we realized there was suddenly a gap in the committee's social media data analysis capabilities. And I stepped in to fill it, and then I brought Alex on to help me execute some of the projects that the committee wanted done.

So, those projects ranged from finding social media posts that people weren't sure had been archived, or that had disappeared or been deleted or anything like that. We definitely were sent on some fun hunting expeditions. And then, of course, there was the data analysis piece, which involved anything from, like I mentioned earlier, talking about the frequency of posts, understanding the longevity of different narratives online, where these narratives popped up, where they ended up, and looking at their transition into the mainstream in a lot of these cases from more fringe sites and fringe areas.

So, like Jacob said, we looked at everything from mainstream platforms like YouTube, and Facebook, and Twitter, all the way to alt-tech platforms like Gab and then fringe platforms like 8kun and 4chan.

Justin Hendrix:

In his foreword to the January 6 report, Chairman Bennie Thompson noted that the committee “pulled back the curtain at certain major social media companies to determine if their policies and protocols were up to the challenge when the president spread a message of violence and his supporters began to plan and coordinate their descent on Washington.”

But let's get one thing out of the way up front. It would appear that much of your work product did not make it into the final report. After that statement from Chairman Thompson, there's a notable absence of a particular section or chapter or appendix focused on it. And then ultimately, I suppose I'm aware there's this 120-page report on this component of the investigation that Rolling Stone reported on that is not included in the underlying documents released by the committee. Can one of you take that on? What do you think is the reason that that material has been essentially left out of these final materials?

Dean Jackson:

I'm happy to give a first general answer to that, and then some of my colleagues who were there at the final stages vetting the report can, I think, elaborate. But I think we all agree that the committee really made a decision, and it's their decision, to tell a story that mostly focused on the former president. The committee felt it was very important to say something to the American public that they could really grasp and understand.

And in doing so, a lot of material that was considered supplemental, like that memo, was not included in the report. And I think what's really important, and the reason I think all of us are here today talking to you, and why we're so eager to get into this, is that it's important that the conclusion not be that there was no role that social media played, or that the companies were exonerated by our investigation, or that somehow the threat of a January 6-style attack or an American demagogue who tramples on our democracy is somehow gone now that Trump is out of office and now that his political star may be declining. We'll see how the presidential election in '24 goes, but certainly he seems to be past the zenith of his political power.

We want to really emphasize that actually, we did pull back the curtain at those companies, and what we saw was not comforting. And if anything, trends since then make us more concerned, right? Because as the tech companies diminish in size and power, which is something a lot of antitrust advocates have been calling for for a long time, a great irony is that they have fewer resources to keep their platforms safe. There are also now more platforms, a greater variety of platforms, where extremists can organize and coordinate, and where researchers like Meghan and Alex have to look to understand what's happening online.

And of course, we've seen since January 6 other incidents of political violence, continued division, the continuance of the narratives about the 2020 election that led to the January 6 attack, and a continued lack of faith and trust in our political institutions. And when you add all of that together, we're still in quite a dangerous place, even though that sense of urgency may have passed. To think that we're not in a dangerous place, or that we're in a less dangerous place, based on the conclusions of the report would be a mistake.

Jacob Glick:

If I could just add, I think that one thing to remember is that the committee was composed of nine really different members from vastly disparate ideological backgrounds who really wanted to come together to create a joint narrative, to tell the American people how to defend our democracy and how best to fortify our freedoms from a very specific threat. And obviously, the report did that extremely well. The entire work of the committee, which we're all really proud to have been a part of, did that surpassingly well, in a way that we've never seen before in American history. And I think when you're looking at it through that lens, working under intense time pressure to create consensus around issues that are really complex, there was a lot left on the cutting room floor from all different parts of the investigation. So, there are parts of so-called gold team depositions that were not used even though they're really relevant to the broader threat that Trump and his allies posed in trying to subvert democracy.

There is a lot of material about our extremism investigation that didn't fit into this narrative that all nine members could agree on to make public in an unprecedented way. And the same thing goes for things about law enforcement failures, and fundraising, and social media investigations as well.

So, one thing that I would urge you to remember is that there are a lot of pieces of the social media investigation that are embedded in our report and in the underlying documents that have been released so far, as this govinfo treasure trove keeps getting populated. And so, while of course I think it would have been nice to see large portions of those documents from all the different parts of the investigation be released, we needed to anchor that in a cohesive narrative that all the members could agree on. And so, now it's up to us to keep parsing through what was released and lift that up to the American public as well.

Justin Hendrix:

There is a lot of material, of course, the things that you're referencing, depositions from social media executives, some documents that were produced by the companies. There's a lot of material of course, as you say, sprinkled into the depositions of the extremists and others that you interviewed. And I mean, I suppose it's worth pointing out that a vast amount of the evidence that is referenced in the report for the actions of individuals and groups of course is drawn from an analysis of social media and from content that was posted on social media sites. I wouldn't even know where to begin, but let's talk a little bit about maybe just some big picture items.

The Rolling Stone report on your investigation included this, I guess, summary note that “the sheer scale of Republican post-election rage had paralyzed decision-makers, particularly at Twitter and Facebook,” who, Rolling Stone says, you concluded feared political reprisals if they took strong action. That is something that comes through really in especially the Twitter depositions, but also from Facebook as well. Can you speak to that dynamic a little bit? What was going on perhaps inside these two major platforms? We'll start there.

Dean Jackson:

Sure. Justin, I'm very familiar with the line that you quoted from the Rolling Stone report, and I think there are two stories that really float to the top of my mind to illustrate the dynamic that you just called out. One comes from the transcript with Brian Fishman, who was the head of dangerous organizations policy at Facebook at the time of the January 6 attack. Dangerous organizations policy is an area that handles... I think it was initially set up to handle terrorist groups, formal terrorist groups like ISIS, but had to metastasize over time into other areas, things like more amorphous networks, things that don't necessarily have a structured leadership or a well-defined membership or anything like that. And so, he was responsible for quite a lot of activity, looking for things that could really turn into offline, real-world harm. And what he told us was, first off, that he advocated for a stronger response to the spread of the Stop the Steal campaign across Facebook.

There was analysis at the time that a lot of lieutenant groups were very active in those groups. A report that was leaked by Frances Haugen after the attack that looked at the role Stop the Steal played on Facebook found that a very, very small percentage of users in those groups was responsible for most of the growth. The groups primarily grew by invitation, not by algorithm. Just 0.3% of users in those groups were responsible for 30% of the invitations.

So, there was a good amount of organic growth, but it was there. Something he told us was that movements have organizers, and Stop the Steal was really organized on Facebook. And so, he said, this is concerning. I'm concerned that this movement... There's a lot of violent activity, a lot of violent speech in these groups, and I'm worried about this. But he also told us that if Facebook had acted, if they had, say, passed a policy against delegitimizing the 2020 election, a policy against election denial, almost all of conservative media at the time was complicit in spreading these lies. And so, Facebook would have had to take action not just against Stop the Steal groups, but against conservative talking heads, major conservative news sites, major social media influencers, political figures.

When fully half of the political spectrum is committed and willing to throw their weight behind a big lie like Trump's, it becomes very easy for social media executives to give in to the prevailing incentive to find a way not to act. It makes it difficult for them to act, and much easier for them to find excuses not to.

At Twitter, you saw in the run-up to the election, and this is really well documented in the transcript with the first Twitter witness we spoke to, who was quoted at one of the hearings. They did a lot of work after the first presidential debate, when President Trump told the Proud Boys to stand back and stand by and refused to disavow potential political violence by his supporters, on a policy about coded incitement to violence, meaning implicit calls for violence: phrases that indicate violence would be acceptable or might be imminent, but don't quite cross the line into an active threat with an identifiable time, place, and target, right?

And this policy was really controversial within Twitter, and they did a lot of research. They dug up hundreds of tweets that they said showed, first off, that this problem was pervasive, but also that it could be defined and responded to. And they wrote a draft policy, which we were able to see. But when they took it to leadership, leadership cherry-picked from the data certain phrases, maybe one in particular, “locked and loaded,” which came into prominence after the Kyle Rittenhouse shooting, and said, well, these phrases... What if they referred to self-defense in the home? We wouldn't want to remove that speech. And so, the policy was struck down. But “locked and loaded,” of course, was really one of several phrases they looked at. It was only responsible for a relatively small number of the tweets that they provided in their analysis. And of those, I would argue, having seen the list of all the tweets, the number that could be alluding to self-defense was also limited. It's not clear in every case that this was a self-defense claim.

So, you can see in some of these stories and in some of these documents and chat transcripts and other things, executives trying to find a way not to take strong action, because they're so worried that if they do, the weight of this anger over the election and the longstanding critiques of anti-conservative censorship by the companies will come falling down on their heads. And it takes something like January 6 to actually force them to take more dramatic action, at which point, of course, it's already too late.

Justin Hendrix:

I want to come back to the backlash, because we're still living in the backlash to some extent, and we'll maybe get into that towards the end. But I want to try to get you to comment on this dynamic that you're talking about. I mean, it seems like when you read these depositions, certainly both from Brian Fishman, who you identified at Facebook, and then also Anika Collier Navaroli at Twitter, there's this tension between what appears to be the policy to deal with content at the individual level, as a content instance, as different utterances of individuals, whether those are violative of policies or not, and then looking at the harm of content collectively. So, network behavior, collective behavior, what one task force inside Facebook ultimately referred to as an adversarial harmful movement. Brian Fishman describes violence-inducing conspiracy networks. So, this tension between thinking about social media posts as individual utterances versus collective, cascading behaviors that may have real-world impacts, that seems to really come through. Did you all grapple with that, or come to any conclusions about that tension?

Dean Jackson:

We talked about that a lot, because you can see them over time, especially at Facebook, though I'm sure other companies were also thinking about this, struggle to adapt to what I think Meghan and Alex have called a post-organizational extremist landscape. A lot of these policies, like I said, come out of the fight against ISIS and other terrorist groups. But when you start to get into more amorphous networks of conspiracy theories that still encourage and lead to offline violence, those types of policies are not well suited to tackling that, right? No one pays membership dues to QAnon. And some of these policies were used eventually to ban QAnon groups from Facebook, and they continue to evolve and have lived under different names at different times, but they really reflect an attempt to get their hands around that problem.

And one way in which this was explained to me, and I think the thinking changed after January 6, was: how do you respond to a network where its members aren't necessarily engaging in policy violations themselves, but the activity that they are organizing, the groups they set up, the communities they create, the conversations they encourage, lead to a statistically measurable increase in policy violations, right? Maybe I'm the moderator of a series of groups in which I'm not engaged in hate speech, and when I see it, I take it down, but the level of hate speech in those groups is 50% higher than in other parts of Facebook. How do you deal with a problem like that? And there wasn't a policy lever, I think, in place before January 6, or at least not a well-defined one that they were ready to grab, that would allow them to do so. And I think they have actually improved their thinking on this since then. But of course, just because a policy lever has been carved out and exists doesn't mean that the political will to use it will be there next time. And that, I think, remains a major weakness, and when we get into recommendations, that's why I think there's an urgent need for more transparency, so that the decisions of tech executives can be more accountable to the public good. But I'd love to hear from some of my colleagues about how they see this policy issue, because it's a really important one.

Jacob Glick:

I was just going to add a very brief point on Twitter's coded incitement to violence policy and its relationship to President Trump in this context. Something that came across very clearly and very alarmingly in these depositions, with both of our Twitter whistleblowers, is that what Twitter was seeing was violent responses to President Trump's posts. That includes his December 19th call to come to Washington, and also, importantly, posts he made after the attack had concluded that ultimately led to his suspension from Twitter. And so while some of those tweets in and of themselves might not have been immediately violative, and this fact has been twisted by the Twitter Files and Elon Musk, once you look at the context in which they're marinating and the responses they're soliciting from users across the country, then you see that there's a real problem with the potential for harm in the network.

And one issue that we saw over and over again across multiple platforms, big and small, is that they simply weren't prepared to grapple with the idea of an American president using social media to provoke this kind of violent response. And so when you're thinking about an American president, obviously, Justin, what you were pointing out about individual posts being analyzed on their own terms, that's really important when you're talking about the most important political figure in American public life. And these companies, I think, had a really difficult time shifting the paradigm to thinking about the president as a potential inspiration for domestic violent extremism, as opposed to simply a political figure who should be afforded the most protection in terms of his ability to use any kind of public microphone to whatever end he or she would want. And so that comes across really strongly and really alarmingly in the Twitter depositions, because there was a belated realization by Twitter that the president's account needed to be viewed as a potential nexus for violence.

Alex Newhouse:

The thing I wanted to add, too, is that the reason why the social media companies had such a hard time dealing with these kinds of things is because historically, trust and safety teams were set up to deal with basically two different types of harmful behavior: people using racial slurs, and people flying the ISIS flag. That's about it. And the mechanisms by which those trust and safety teams did that: traditional trust and safety has a queue of content that comes in and a set of often contracted, lower-paid content moderators who review each piece of content individually, outside of context, and make a decision, yes or no, should we take this down or should we elevate it for additional review? That's basically it. And that's how trust and safety teams, and that's how social media companies, did content moderation for years.

It's literally just in the last few years that we see, basically exclusively, the big companies, Facebook, YouTube, Microsoft, et cetera, starting to implement more nuanced, more sophisticated detection and mitigation schemes. 2020 was the first time that Facebook actually publicized doing network takedowns of harmful content. One of the first of those violence-inducing conspiracy networks, or whatever they call it, was the Boogaloo network in the summer of 2020. And then the next one, the second one, was QAnon in September 2020, I think. It's still very rare that we see a company actually undertaking that sort of actor-based, network-based approach to content moderation. It's become more of an emphasis after January 6th, but especially beforehand, the paradigms were still so heavily weighted toward this queue-to-content-moderator, content-based approach. And that means that these companies, both due to political reasons and also due to just the mechanics of their teams and the way they do content moderation, are incredibly inelastic. They have a really hard time being flexible and responding to very, very new manifestations of extremism, conspiracy theories, and violence.

Meghan Conroy:

You may have noticed that all three of my colleagues have been talking about content moderation. And I think the key focus of our investigation, or at least the key findings of our investigation, revolved around content moderation. And I think that that may be at odds with what the public is focused on or what policymakers are focused on. I know Alex and I got so many questions from higher-ups, committee members included, about algorithms. What's the role of algorithms? Do algorithms radicalize people? Just kind of this focus on that. And I think our investigation yielded what a lot of researchers already knew, but this is just further evidence that content moderation should be the focus as far as policymaking. Because algorithms, for sure, they play a role. That's undeniable in terms of leading people to problematic content. But that wouldn't be an issue if the problematic content wasn't there to begin with.

And I think it's easy to assign blame to an algorithm, because there's obviously a lack of transparency from the platforms about the algorithms that they use on their respective platforms and the roles that algorithms have in leading folks to certain content and dictating their user journey on a given platform. But I think we really should be zeroing in on content moderation rather than this elusive boogeyman that's hard to define. Because “algorithms are bad” is very black-and-white thinking. It's very easy to just assign blame to this hard-to-understand thing, whereas obviously, as we've talked about a lot already today, content moderation is really hard. It's hard to define what is extremism, what is hate speech, what is a dog whistle, what's coded language. Extremist language and extremist talking points and the narratives that they use evolve very quickly, and it's hard for trust and safety teams to stay up to date on that. And for policymakers, holding platforms to account objectively is going to be more difficult when it comes to something like content moderation that's a lot more subjective.

Justin Hendrix:

One of the things that comes through in Anika Collier Navaroli's testimony is this idea of judging the intent of a piece of content based on the reactions to it, as opposed to simply the text that's in that particular utterance. And in Brian Fishman's, of course, you've already mentioned this, there's this sort of distinction between actor-based policies and content-based policies. So clearly we're kind of at a tipping point in terms of the way that people perhaps are thinking about it. I want to ask another kind of big-picture question, because it's something that's addressed in both the Twitter and the Facebook depositions in particular, which is: had anything been done differently at these companies prior to January 6th, would anything have been different on the ground at the Capitol? Can we tell that? That seems to really get to it, right? Had these companies performed better, had they followed their own policies, had they perhaps struck QAnon as a network from their sites beforehand, et cetera, would it have mattered on January 6th?

Jacob Glick:

I think it really would have. One thing that we saw in our extremism depositions, depositions of Proud Boys, Oath Keepers, and everyday rioters who came to the Capitol, is how affected they were by what was going on on social media. A lot of them didn't actually see the president's initial tweet, but they saw people reposting the tweet. And that's just one example. So the spread of this sort of incendiary rhetoric aimed at the heart of our constitutional process happened on social media. And Alex and Meghan can speak to that much better than I could. But one thing that we saw in the depositions of the Twitter whistleblowers in particular is that there was a failure to follow basic procedures for other high-risk events across the world where there was a risk of political violence. And I think it was our second Twitter whistleblower, who is remaining anonymous, who talked about how it would be typical for an event in another country where there was a contested transfer of power to have a special team in place to triage any violence that occurred on the ground.

And there was a resistance to setting that up in advance of January 6th. And both Twitter employees we spoke to talked about how instead of having a specialized team, there was basically a skeleton crew that was deleting tweets as if you and I were deleting tweets, just searching through Twitter as if they were normal users. And that's pretty unacceptable from the perspective of a country that was seeing these events unfold in real time and was worried about their impact on our electoral system. So we can go through a number of examples. I think of Twitter's refusal to implement a policy. Brian Fishman talked about in his deposition how Stop the Steal didn't necessarily meet the threshold of some of the coordinated harmful networks like QAnon did. And while it's true they weren't endorsing violence as openly as QAnon, Facebook didn't have an election delegitimization policy, and they didn't want to put one in place. And that's Facebook's choice.

It goes back to what we've been talking about, about the center of political gravity shifting more towards anti-democratic extremes, so you have to accommodate that on major social media networks. But with Twitter and Facebook, and I think you could also group in YouTube and Reddit, which are mentioned in the report, you can talk about a lot of these policies and these platforms refusing to see what was in front of them. And then you fast forward to Twitter employees deleting tweets about breaching the Capitol and tweets with the hashtag "execute Mike Pence" in real time. And we know that affected the crowd. And it's not just President Trump's tweets that were affecting the crowd. It was everyone's.

Meghan Conroy:

And just bouncing off of that, I think to bring it back to Trump a little bit before we talk about the spread of that, the nature of that sentiment… Trump is really good at demonizing his enemies and convincing his followers that they're facing the end of America as they know it, the end of their lifestyles as they know it. And Alex mentioned that bloodthirsty sentiment that was all over social media, and I think that's ultimately it. Trump was able to spread his views, or spread his beliefs, this notion that the world is going to come to an end, America is going to come to an end, if Democrats... if there is a peaceful transition of power and Joe Biden assumes the presidency. And he spread that to millions of people, both via legacy media and new media like social media, just by being himself and talking a lot and tweeting a lot.

And the social media platforms let that happen. They let that spread. So by the time January 6th came to pass, users were being absolutely bombarded with narratives surrounding this allegedly stolen election, the notion that there's an imminent civil war, an imminent revolution, and reminders that President Trump had summoned them to the Capitol with his December 19th tweet. And that was, I mean, Alex and I saw that across every platform that we looked at. It wasn't just ones with content moderation policies that weren't good enough, or with no content moderation policy like some of the alt-tech and fringe platforms. It was across the board. So whatever platform you were on, if you were in those circles even a little bit, you were seeing this content again and again and again. And I think Alex can build on that a bit as well.

Alex Newhouse:

So one of the things that I often emphasize should be the goal of content moderation, and it sometimes isn't, but should be in my opinion, especially on the mainstream side, is to do what I call reducing the radicalization surface, reducing the mobilization surface. So, decrease the sheer number of people who could potentially be exposed to content that could radicalize and mobilize people to violence. And I think one of the things that we saw is that essentially none of the big social media companies, and certainly not any of the alternative social platforms, had any sort of mechanism in place to try to do that in the particular manifestation that was occurring during the Stop the Steal movement. And so what we saw, for example, just to pick one specific case: Meghan and I did a ton of data analysis and archival work on Donald Trump's December 19th tweet saying "Be there. Will be wild," calling everyone to DC. That was posted on Twitter.

What we then saw is that there were a bunch of power-user influencer types picking up that tweet, picking up "Be there. Will be wild" and spinning it and making it more violent, making it increasingly inciteful, and blasting that out to huge audiences. This happened on Facebook, this happened on YouTube, this happened on InfoWars, this happened on traditional media, like traditional broadcast channels. It happened on alternative social media platforms. And then from there, all those audiences picked up those spins, picked up those narratives presented to them by the influencers, and then took them in their own directions and started basically becoming this pressure cooker of hyper-violence, of this obsession with getting some revenge on politicians, doing bodily harm to Democratic politicians on January 6th, based ultimately on the "Be there. Will be wild" tweet. So at any step along the way, any point in that chain, there could have been additional action taken. The Trump tweet could have been taken down faster. That would've disrupted the influencers from picking it up and spinning it. The influencers could have been disrupted, because they were spinning it pretty aggressively and explicitly. And then the big audiences also could have been disrupted better themselves. Any step along the way, and I think you end up with a much, much less dangerous situation on January 6th itself.

Jacob Glick:

No, I think to build on that, there's one counterexample that's really interesting and important to look at, and it's in the report. One thing we found in investigating Discord is that there was a particular forum on Discord, I think it was called Donald's Army, that exploded with content after December 19th. And we had a briefing with Discord; that memo has been released and is currently in the archives of the committee. And we also had some internal documents from Discord, which have also been posted online, talking about the evaluation of that forum called Donald's Army. And in the hours after the December 19th tweet, as Alex was talking about, that forum exploded, and Discord took action to ban it a few hours later, because people were talking about traveling to DC, people were talking about DC's gun laws, they were making sort of little pods to coordinate in different parts of the country. And Discord, which has lots of flaws in dealing with extremist content, to say the least, maybe knew it had gotten burned enough that they saw this and saw it was a clear enough sign of things going badly that, at least for this one forum, they decided to take it down and suspend it.

And you didn't see that on Twitter. Because we actually talked with both of the former Twitter employees we spoke to about how they tried to do something similar at Twitter, and they were refused by their supervisors with respect to that particular tweet. And so going back to what everyone else has said, we did uncover situations in which social media companies acted, and it's an affirmative omission by Twitter and Facebook not to do this kind of thing, because there were other ways out, which would have included deleting the tweet.

Justin Hendrix:

So I want to ask about another kind of nexus, I suppose, between social media and law enforcement. Because one of the things that has become apparent in the months since January 6th, 2021, is that some of the social media platforms sent specific warnings to the FBI. Parler, for instance, made much of the fact that it had sent specific warnings, even though its former CEO largely pleaded the Fifth in his deposition to you. They made that apparent in an earlier House Oversight Committee request, that they had in fact sent I think as many as 50 concerning posts to the FBI in the days leading up. We know that Facebook also sent specific warnings to the FBI. And it appears in the deposition of Jody Williams, who owned the domain for TheDonald[.]win, that, maybe not specific to January 6th, he had personally warned the FBI about dangerous content on his site as well. What's going on here? What did you learn about the extent to which the FBI is paying any attention to these types of warnings, or how it was assessing the threat based on social media?

Jacob Glick:

While we are not best situated to talk about some of the FBI's internal evaluation of the threat, what I can say is that it was striking, going through our Parler productions, to see how this website that prides itself on freedom of speech and sort of owning the libs was sending these messages. One message on January 2nd had a high-ranking Parler employee saying to the FBI, "I'm really worried about Wednesday," Wednesday being January 6th. And that to me leaves a lot of disturbing questions about the FBI's willingness to look at the threat posed by President Trump's followers, for lack of a better term, this anti-democratic far-right movement, as something that's coordinated and not a lone wolf phenomenon. And we saw that again and again, where even when they're getting calls from inside the house, the FBI wasn't willing to see this threat as something more than a one-off. And obviously the committee's released a lot of materials about the sort of scarcity of urgent threat messages from various federal law enforcement agencies. And it looks even worse when you see someone like Jody Williams telling the FBI that things are amiss on his site.

Meghan Conroy:

I think it's also worth noting that a lot of the folks who ended up showing up on January 6th and actually committing acts of violence didn't necessarily post online ahead of time. Some of them did and said things like, "I'll be there. Trump called me there," et cetera. But a lot of them weren't making explicit threats. They'd say things like, "There's going to be civil war," et cetera. But for a lot of the folks who were posting online in the lead-up to January 6th, it's hard to differentiate between shitposting and actual threats. And I think that's obviously something that law enforcement has always struggled with, and I think increasingly has struggled with, in terms of really valuing the right-wing extremist threat to the degree it should, and moving on from the kind of lone actor bias that Jacob mentioned, where they don't see the network. They see these so-called lone wolves, which obviously Alex can pop off on. He has a lot of strong feelings. But I think that's ultimately it: differentiating between the shitposting and legitimate threats is difficult.

But at the same time, I mean, again, that was obviously a different team's investigation in terms of the federal failures and law enforcement failures and their lack of, or unwillingness, or inability to see the threat for what it was. But in terms of the sheer volume of posts that our team identified and unearthed, and there was... I don't... I'm wondering what I can say here. I think it is impossible for people to say that they didn't know what was coming. I think there were plenty of analysts, especially folks like Alex and I, who before January 6th were doing this kind of work, monitoring far-right extremist activity online and also just monitoring mainstream right-wing actors and what they were saying. And folks were engaging in stochastic terrorism, even on Fox News. And Alex and I made a case for that in our memos for the committee. And so I think that ultimately the threat was there. Plenty of people who were paying attention to these right-wing spaces knew that there was going to be violence that day, just because of the sheer volume of posts and the bloodthirstiness and this convergence around a shared cause, and that shared cause was Donald Trump and preventing the peaceful transition of power. So I'm not really sure what an excuse could be for not seeing the threat for what it was.

Justin Hendrix:

And of course, Christopher Wray had testified that that was one of the problems: separating the wheat from the chaff, being able to pick out the signal from the vast amount of noise on social media. But I'm talking about something much more specific, which is specific warnings from employees at these platforms: this is violent content, there is violence, there are specific threats being planned, which for some reason appear not to have been addressed. But Alex, do you have anything to add on this point?

Dean Jackson:

I guess I'd just say, Justin, that something that struck me during the investigation is how much effort and how many resources are dedicated to sort of intelligence activity within these companies. I mean, especially the larger platforms have internal teams that look for not just extremist activity on their site, not doing content moderation, but going off-platform and trying to figure out, well, are bad actors organizing in ways we need to be aware of? They hire consultants to do this, they get bulletins from law enforcement, and they communicate back to law enforcement. And so there was this very routine, I think, monitoring happening, not just of whether we're seeing a lot of violent posts or violent tweets, but in chat rooms on other services. Are we seeing an uptick in activity? Are we seeing mobilization and organization? Are protests around the country getting more or less violent? And that activity continued after the election and into the 6th.

And so in a way, that really mirrored what law enforcement was also doing, and there's a sort of parallel failure, I think, to somehow take all that intelligence and monitoring and then make good decisions to be prepared for the worst-case scenario, which is of course what came to happen. And I guess I'd just wrap up by saying it was really amazing to me, when Elon Musk released all of these communications from inside Twitter and journalists ran with them under the label of the Twitter Files, to see these breathless headlines like, social media companies communicated with law enforcement agencies in advance of January 6th.

Well, of course they do. They've been looking for terrorist content for years. These communication channels were set up, some of them after the 2016 election and Russian interference. It's routine, it's very normal for them to have been communicating about that. And in a way it's healthy, because the internet is a fragmented space in which, if you only look at one platform, you're never going to see the full picture of these kinds of threats. You have to be looking across a range of spaces, including offline, to know what's really happening in society and where the threats are.

Jacob Glick:

One other thing that occurs to me is how many of the platforms talked to us about their response to the election. The documents we received, the briefings we received, the depositions we conducted focused on their largely successful response to the election, how attentive law enforcement was leading up to election day in November, sometimes because of the 2016 Russian interference campaign. And that was hanging over all the social media companies' heads. And that's clear from certainly Brian Fishman's transcript, as well as some of the conversations we had with Twitter. But once the election was over and there was not a lot of violence on election day, I think there was this drift towards normalcy, this desire to see the moment of crisis as over. And so we could talk more about Facebook's decisions to disband a lot of these policies that were put in place on an emergency basis, and Twitter's unwillingness to see the crisis as a continuing, worsening dynamic.

And that goes to a broader conversation that hopefully the committee's report and the committee's work has helped to elevate. But that conversation is about whether or not we can talk about American democracy in the terms that we're used to, with election day being the pivot point, and then we can all go back to work. We should not, in future election cycles, have these crisis communication centers to facilitate communication between social media companies and law enforcement only on election day and then disband them when in fact there's a continuing campaign to delegitimize the election. That's something that needs to be in place until inauguration day. And that's not at all what we were seeing from these social media companies, and certainly not what we were seeing from law enforcement. And that should change.

Justin Hendrix:

I think we could perhaps all agree that maybe these processes need to be more transparent. We need to understand these relationships between law enforcement and the social media platforms better.

Jacob Glick:

Definitely. And I think we need to understand what the criteria are for coordinated conversations and when a one-off is justified, because in my mind, if there are credible allegations about an attack on the Capitol, not only should the FBI be having a more robust response, they should be demanding the full extent of their partnership with these social media companies. And it's not clear to me that that was happening.

Justin Hendrix:

I want to just ask you a few, maybe slightly inside-baseball questions about the extent to which these companies cooperated with your investigation. I mean, clearly there were subpoenas that were sent almost a year ago, and there is evidence in the depositions that a lot of documents were produced, materials, policies, including some retrospectives that it looks like Facebook and Twitter in particular conducted to look at their own performance, essentially, around the 2020 election and afterwards. How would you characterize perhaps those major platforms? And in particular, I don't want to leave out Google and YouTube, for which there doesn't appear to be any documentation of their participation in your investigation. Does anybody want to take that one on?

Dean Jackson:

I can start, and I know Jacob will have a lot of thoughts on this, and Meghan and Alex, I would appreciate your insights too. But since I did handle those three big companies for a particularly high-intensity part of the investigation, let's say before the hearings started, I don't want to get too much into conversations with lawyers, but I'll say generally that their strategies really differed. I mean, we did receive tens of thousands of pages of documents, which we had to review and sort and catalog, in part just to figure out what was useful and what was not. One company in particular sent us quite a bit of junk mail. I know what email newsletters their executives subscribed to. And this was not especially useful information, so we had to sort through it and just set it aside. That's one tactic: flood us with information to slow down the investigation.

But some of the material was really useful. I mean, some companies did send documents that were sensitive, internal, detailed, sometimes so detailed and so inside baseball, even, that we needed more expertise to navigate them. And their full significance only became clear a month later, when you'd read so many other documents that you were starting to be able to put together the pieces of these various email chains and memoranda.

Some companies took a pretty stonewalling approach. They argued that congressional inquiries into their content moderation decisions were a violation of their sort of First Amendment right to make those decisions unimpeded by government intimidation. There were court cases that indicate, really, that the January 6th committee had a uniquely weighty interest in understanding those decisions, and I'm not sure that those companies' stance would pass constitutional muster, but it was their stance. Then other companies were compliant but less forthcoming. We didn't get as much detail or data from them, and it's not always clear that they preserved the documents or data that would have been responsive to our questions. And so while they were very friendly and willing to make staff available for questioning, and we did have an ongoing dialogue with them, the amount of material that we received was not always satisfying. And so you saw really a range of responses, but among the big companies, it's not as if any of them refused to play ball. They just had very different playbooks.

Jacob Glick:

And I think of the dozen and a half social media companies that we investigated, we had by and large good faith compliance, as Dean said, in various ways. And it was really an issue of, yes, there were some constitutional arguments and there were some disagreements about the kinds of documents that we were entitled to and that were worthwhile to us. But part of that goes to what we've been talking about for the last hour, which is: how did these social media companies treat January 6th and the election delegitimization campaign by President Trump? As a discrete, dangerous phenomenon, or as just another run-of-the-mill content moderation hurdle that didn't really deserve special attention? And I think in many cases we got documents from a lot of these companies dealing with the election, as I was just saying, and they had less to say about the period between November and January.

And that is not a problem of their non-compliance with a subpoena. That's not a problem with our subpoenas themselves. It's really a problem of these broader questions of social media companies' ability to react, and our society's ability to react, to this authoritarian moment we faced and in many ways are still facing. And I will also note that we were able to have conversations with these companies. I think almost every company we either requested information from or subpoenaed, we spoke to them in sort of a substantive briefing setting. And then some of that has shown up in the report; some of that didn't make it into the report. And so it was an impressive acknowledgement that this is an important investigation that requires a response. The depth and tenor of the response was obviously always going to vary.

Justin Hendrix:

Okay. I've just got a couple of last sort of areas I want to get into. And I know that we've got this running through 10:30, and I don't want to make this podcast last forever, but I do want to make sure I just hit a couple more bits. One thing I want to ask about as well is the intersection between the investigation into social media and violent extremism. You've talked about the extent to which they seem to overlap. There has been some criticism of the report, notably in Just Security, where Jon Lewis from the GWU Program on Extremism found the report to be slightly pulling its punches on the role of extremism in the United States. Can you talk about that nexus? Do you feel that it was well enough understood or well enough investigated? And were your teams, the teams focused on those things, tied tightly enough together?

Jacob Glick:

Well, I'll start. I don't want to monopolize, because I know Meghan and Alex will have something to say on this too. I sat on both those teams, really, in terms of the depositions, and it was the same team in many respects. We worked very closely with the red team on all these depositions, and we made sure that there were questions being asked about social media use and influence in all of those depositions. I think that, as with a lot of topics in the report, there's not a complete treatment of every single issue, and that's to be expected. But again, we have reams of transcript material from these extremists of varying stripes that really paint a compelling picture of how this problem goes far beyond the Oval Office.

And while that wasn't the main focus of the report, and there are good reasons for that, it's now up to people like us to make that story available to the public. And Meghan and Alex, I know, had done a lot of research on the impact of social media and on trends in extremist spaces, but we, I think, did a pretty good job of exploring that nexus, deposing folks like Jody Williams and Jim Watkins and getting documents from people in and around their circle, and also more traditional media figures like Charlie Kirk and others. Not all of them were cooperative. Nick Fuentes was not cooperative when we asked him questions about social media, but that's the nature of the investigation.

Justin Hendrix:

Meghan, anything to add on this subject? Let's say the committee were going to extend for another year and you had another year to look at this connection between violent extremism and social media. What would you want to look at?

Meghan Conroy:

That's a great question. I think we laid out the case pretty well in our work. I mean, I think our team did a really good job of showcasing the ways in which extremist belief systems and just blatant disinformation navigated their way through, and propagated through, a broad media ecosystem inclusive of both social media and legacy media, and how people who are kind of trusted voices in the mainstream were absolutely parroting propaganda pushed by bad faith actors. And so I think that case was made. I think, you know, we have people like Mary McCord, who has written at length about ways to handle paramilitary organizations legislatively and from a law enforcement perspective.

And we, the purple team, solicited over 70 expert witness statements from a slew of folks from different disciplines, different sectors, and on a range of topics. I know Jacob led the charge as far as getting experts to write about Christian nationalism. We had folks write on specific platforms and those platforms' affordances and what was happening on those platforms in the lead-up to January 6th. We had people writing about conspiracy theories. Actually, CTEC did a great statement on conspiracy theories and how they were the glue that brought all these folks together on January 6th. And I think we have the answers as far as that relationship. This was answered by entities besides the committee as well. Obviously Just Security has done some phenomenal work on it. The Election Integrity Partnership has done amazing work. New America and Arizona State did great work.

And it's just a matter of taking those findings and turning them into meaningful legislative recommendations that are actionable. If we could spend some more time on legislative recommendations, I think that would be a good use of our time.

Alex Newhouse:

I think there is a legitimate conversation to be had there around the fact that Trump, the MAGA movement, and these hardened extremists who consider themselves essentially foot soldiers of the MAGA movement are all together basically one symptom of a much, much broader problem. That's been said ad nauseam, but it's worth reiterating here. Social media and the internet have supercharged this process of decentralized, amorphous, large-scale radicalization that gave rise to January 6th. And the report, for a lot of legitimate reasons, a lot of very appropriate decision-making on the part of the members of Congress, focused on that one specific focal point of this much larger trend. But it's worth saying that the things that we're seeing today, this ability for something like January 6th to occur, the ability for QAnon, in a vacuum a truly off-the-wall, unhinged conspiracy theory, to radicalize literally millions of people throughout the world, all of those things are driven by the changes that have been wrought on the fabric of social interactions by social media, by the internet.

You don't get the same sort of massive, geographically decentralized mobilization and communications lines without social media. You don't get the same ability for people who would otherwise be basically niche celebrities to consolidate audiences of millions of people and blast violent propaganda to them without social media and the internet. So all of these things we see are linked together, they're interconnected, and the trends in radicalization and violent extremism that we're seeing are being magnified by significant margins by virtue of the mechanics of social media and the internet today.

So the final report from the committee told one piece of the story. I think our role here, mine and my colleagues', in the coming months and years, is to try to help tell the rest of the story, to fill in the gaps and say, hey, Trump is obviously an instrumental part of this, the Proud Boys and the Oath Keepers are obviously an instrumental part of this, but if we truly want to start dealing with the root causes of the fragility of American democracy, and of Western democracy across the world, we're going to have to deal with the fact that social media is causing fundamental changes in social interaction that are giving rise to the possibility of mass-scale violent extremist mobilization.

Jacob Glick:

And I think that's a really good point, Alex. One thing that occurred to me while you were speaking was how we always envisioned the work of the purple team as the cement that would hold together this broader story of extremism and an authoritarian crisis point. And I think if you read the report and you read through the underlying materials, we did that, and the committee was happy that we were able to provide that evidence. And it fills in the gaps, as you said, Alex, in a story that's about President Trump and backroom conversations and a lot of the subversion campaigns that we saw across the country, in state houses and in the halls of Congress. That story of extremism and social media is sprinkled throughout in a way that heightens the stakes for the American public. But it's obviously only the beginning of the story, not the end.

And we should view it as such. It's a starting point for us to expand public education and, eventually, legislative reforms around these issues. It's better, I think, to view the report's treatment of extremism and social media that way, as the connective tissue of a story that's about Trump and about the eight-week period between the election and January 6th, rather than as something that's first and foremost a social media investigation or an extremism investigation, because that's not what the immediate task for the committee was. And so, with limited staff and limited time, I think we were able to infuse those narratives, and now we've got to pull them out.

Meghan Conroy:

I will say, yeah, exactly what Jacob said. I think the committee did a good job in the report of laying out just the facts. And that's why, like Dean said in the beginning, we are now coming out and saying, okay, we have just the facts; let's add some color, let's add some analysis, and let's keep building on this. Like Jacob said, I think the report did a good job of laying out the evidence exactly as we saw it, so we're just hoping to add color to that.

Justin Hendrix:

I do want to end on recommendations for the future. The report, of course, has its recommendations, none of which were specific to social media or to questions about tech more generally. But what are your recommendations if you now do what Jacob mentioned and go on to advocate for solutions to the types of problems we've discussed today? Where do you think we have to start?

Dean Jackson:

I guess I'll start while others organize their thoughts. Over the course of my time with the committee staff, I started to think about social media's impact on American democracy, and on January 6th, as maybe a series of concentric circles, where the middle is the lead-up to the sixth itself. Things like the December 19th tweet, Trump's "Be there! Will be wild!" tweet, which served as a catalyst for extremists to target a specific date. Things like the Stop the Steal movement and its unchecked spread. That's the middle, and I think there were real failings there that we uncovered. The next ring out, I think, is the response to the election and specific measures like Facebook's break-glass measures, which were a large and complicated suite of interventions meant to do things more sophisticated than making individual content moderation decisions. Things like using AI to make a probability judgment about whether or not a piece of content is violent incitement, and then potentially demoting that content in users' news feeds, right? Because Facebook knows there's so much activity on Facebook every day that it can't hire enough people to look at every single piece of content, and it's forced to rely on machines to triage some of this work.
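
(A minimal, hypothetical sketch of the kind of probabilistic triage-and-demotion pipeline Jackson describes. The toy classifier, threshold values, and feed logic below are illustrative assumptions, not Facebook's actual break-glass implementation.)

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    base_rank: float  # the score the feed would normally use to order this post


def incitement_probability(post: Post) -> float:
    """Stand-in for a trained classifier that returns P(violent incitement).

    A real system would call an ML model; this keyword heuristic only
    illustrates the interface such a classifier might expose.
    """
    violent_terms = ("storm the", "take up arms", "fight in the streets")  # toy list
    hits = sum(term in post.text.lower() for term in violent_terms)
    return min(1.0, 0.34 * hits)


def rank_with_demotion(posts: list[Post],
                       demote_threshold: float = 0.7,
                       demotion_factor: float = 0.1) -> list[Post]:
    """Demote (rather than remove) posts the classifier flags as likely incitement."""
    def adjusted(post: Post) -> float:
        p = incitement_probability(post)
        return post.base_rank * (demotion_factor if p >= demote_threshold else 1.0)

    # Posts above the threshold sink in the feed instead of being deleted outright.
    return sorted(posts, key=adjusted, reverse=True)
```

The design choice worth noting is that demotion is a ranking intervention, not a takedown: content stays on the platform but reaches far fewer feeds, which is why such measures are hard to observe from the outside without the transparency Jackson goes on to describe.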

That, I think, is a really interesting area to look at for future recommendations, because it sits between the really concrete decisions made around January 6th and a broader argument, what I think of as the outermost concentric ring: the argument that social media has accelerated content that is poisoning our public sphere. It's in that middle ring where really interesting conversations about the role of artificial intelligence in public discourse are happening, and where I think there's actually a lot of potential to do things like promote healthier conversations and diminish the prevalence of violent rhetoric, conspiracy theories, hate speech, and all the toxic elements that, you could argue, raised the temperature of public discourse to a point where something like January 6th could happen.

But we don't have any transparency into that space. It's not as if we know the break-glass measures in any detail because Facebook willingly published them online, right? We have them because of leaks and because of investigations. I think two things Congress could do would be, one, to pass meaningful transparency laws to give researchers access to data on measures like that and on social media activity. This is not a novel recommendation. There are already bills written that would address this problem in different ways, and Europe is charging ahead with the Digital Services Act, which has components that address this specific problem. But it's really, really important, because it serves as what one expert during the investigation phrased as "the factual predicate for empirical policymaking." So it's foundational to better recommendations in the future.

But the other thing I think is higher level that Congress could do is lead a public conversation about these things. Go beyond, "Well, I saw this thing on Facebook and it was troubling, and it should have come down," to really help the American public and lawmakers understand these processes, the ways that social media platforms work, the ways they elevate certain types of content, the way they shape conversations, and the ways in which those are both harmful and helpful.

The Black Lives Matter movement was able to mobilize using social media, but so were extremists. How do you preserve healthy social movements and the ability to create change while limiting the organizational potential of those who would really aim to destroy and subvert democratic processes? I don't think the public conversation is really in a sophisticated enough place to first build consensus around that and then make concrete recommendations. But Congress could convene maybe some kind of blue-ribbon commission to have a good-faith conversation about that and to really raise the level of understanding, so we might begin moving in that direction.

Jacob Glick:

So the two that occurred to me are actually a little bit more in the extremism space than the social media space. I was going to wait until the end, but I can go now too. One thing that I think is really important to grapple with is how we defuse the continuing threat of paramilitary violence, and political violence in general. One thing that ICAP, and Mary McCord, has been working on for several years is a federal prohibition on private paramilitary activity consistent with the First and Second Amendments. And so I think something like that, a federal statute to deal with illegal activity by militia groups, is a really important conversation for Congress to have, because too many states and too many local law enforcement entities don't realize they can take action against these groups. There's been a lot of propaganda, and the political will has been not to act against the Proud Boys when they show up armed to a drag show, or the Oath Keepers when they show up armed to some sort of federal standoff.

And so as we see these threats evolve and continue to gain strength after January 6th, that's something Congress needs to work through, not only in the sense of a paramilitary statute, but also by giving direction to federal and local law enforcement, perhaps by conditioning aid to local law enforcement on sufficient anti-extremism or insider-threat training. There were a lot of provisions within the recently passed funding bill that stripped out requirements that would have pushed federal law enforcement to look at problems of white supremacy and extremism within their ranks, and to reevaluate their approach to far-right extremism in general. I think that was a grave mistake and something that should be revisited when there is a political opportunity to do so. But of course, we need a Speaker of the House first. So...

Alex Newhouse:

Yeah, to add on here, I'm not a federal policy specialist, so I can only speak a little bit to that, but I can speak to some recommendations I have for tech companies. Just to start off, though, I do think we're seeing at the law enforcement level in recent months some additional creativity in the use of existing statutes to address domestic terrorism and domestic violent extremism. There was a case in Ohio that I think is in sentencing right now; it was, I believe, the first case to use the material support for terrorism statute against domestic extremism. It was a white supremacist plot against infrastructure. And I think there are opportunities to creatively use statutes like that to go after organized networks of individuals who plan attacks on infrastructure, government, et cetera. So I definitely want to emphasize that there should be additional lines of education, training, and exploration around those kinds of things, because, like Jacob and Dean were saying, a lot of times local and even regional jurisdictions simply don't know how to use the resources that are already available to them.

On the tech platform side, though, I do want to offer a couple of recommendations. I think the core one is that there absolutely needs to be a massive increase in investment in cross-platform, cross-company collaboration. The Global Internet Forum to Counter Terrorism, GIFCT, does incredible work. They're an awesome start. But their main product is a database of hashes of pretty overt extremist content, like manifestos and livestreams and that kind of thing. It's an amazing product and it works really well, but it's the starting place, right? That's the foundation: getting a handle on collaboration across companies for the very overt stuff, like mass shootings. There needs to be more investment in how we handle, in a cross-platform way, threats to democracy, threats to civil society, those kinds of things. We're getting there, and there are certainly some avenues for that and some foundations doing awesome work. But if there are representatives of tech companies out there listening to this, pushing to get more investment in those types of collaborative spaces is probably the single best thing you can do.
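
(A minimal sketch of the hash-sharing idea behind a consortium database like the one Newhouse describes. Real systems typically use perceptual hashes so that near-duplicates still match; the exact SHA-256 matching and the class and method names below are simplifying assumptions for illustration, not GIFCT's actual implementation.)

```python
import hashlib


def fingerprint(content: bytes) -> str:
    """Hash content so platforms can share signals without sharing the content itself."""
    return hashlib.sha256(content).hexdigest()


class SharedHashDatabase:
    """Toy stand-in for a cross-platform database of known extremist-content hashes."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def contribute(self, content: bytes) -> str:
        """One platform adds a hash of content it has already identified and removed."""
        digest = fingerprint(content)
        self._hashes.add(digest)
        return digest

    def is_known(self, content: bytes) -> bool:
        """Another platform checks a new upload against the shared list of hashes."""
        return fingerprint(content) in self._hashes


# Example: platform A contributes a known manifesto; platform B later checks an upload.
shared_db = SharedHashDatabase()
shared_db.contribute(b"<bytes of a known manifesto>")
print(shared_db.is_known(b"<bytes of a known manifesto>"))  # True: flag for review
```

The point of the design is that only fingerprints move between companies, which is what makes this kind of collaboration tractable for overt, already-identified content, and also why it is harder to extend to the fuzzier cross-platform threats Newhouse mentions.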

Meghan Conroy:

And just bouncing off of what Alex said about cross-platform collaboration and understanding the way that information, users, and influencers move across the information ecosystem: I think we collectively, and this goes for the tech sector, for researchers and academics, for practitioners, and for policymakers, need to update the frameworks through which we understand both social media and extremism, and the way those intersect. And that should be a networked approach.

So obviously we have the platforms, which have created an ecosystem full of echo chambers, or however you want to refer to it, that has facilitated the rapid spread of disinformation and extremist thought. And that also applies to the extremists themselves. Dean referenced the post-organizational extremist landscape earlier, and that's exactly how it is situated at this point. Of course the Proud Boys, the Oath Keepers, and various militias play a role, but the role of those groups has changed. And I think law enforcement, those making policy recommendations, and those studying these groups should update their frameworks to understand the extremist threat as one that is very much about the network. Alex and I have probably said thousands of times to people throughout our [inaudible 01:18:09] committee that the threat is the network, and that applies to both social media and the extremist threat.

Justin Hendrix:

In many ways, we're beginning to see a backlash against the study of these topics. There are efforts to impede the research of academics who are looking at these questions. And there are, of course, the proponents of the Twitter Files, who would suggest that a lot of the actions the platforms took around the election, with regard to the suspension of Donald Trump and the decision to do that on multiple platforms, but also other content moderation decisions, such as those taken around Hunter Biden's laptop prior to the election, were inappropriate.

They would say the platforms were meddling with free expression, and that essentially a lot of what we're talking about here is, in their view, part of the problem. I don't know, how do you answer that? And we should talk a little bit about the suspension of Donald Trump, because that's an open question, right? Facebook is about to decide whether to let him back on. Twitter, under Elon Musk, has essentially restored his account, though he hasn't tweeted. YouTube, we're not sure exactly how they'll make that decision, but apparently they could put him back on the platform at some point as well.

Dean Jackson:

I'm sure you'll have thoughts on this too, but I'll start with the Trump ban and the Twitter Files, because of what I think is kind of amazing about that discourse, about that argument. The argument, to summarize, is that Trump was held to a unique standard when Twitter decided to remove him from the platform, that previous world leaders had made statements that contributed to violence and remained on the platform, and that there was internal discourse at Twitter revealing the company really didn't have a standard or a rule that led directly to his removal, that it was a sort of shooting-from-the-hip decision by top executives that exhibited some form of bias against the former president.

When I saw this come out and saw the messages they were drawing from, some of which involved the same people or the same topics that we had also looked at, I thought backward in time from January 6th over the many conversations happening within Twitter during the 2020 election. And it was really amazing to me that they had reached that conclusion.

Because what we saw and what we found was actually a company that was really not prepared to take strong action against Trump, that avoided making rules that would have penalized him or his supporters for activity that was objectionable. And then on the 6th itself, what I saw was actually what you would want to see within a social media company making a decision of that historic gravity, which is debate, dialogue about whether or not Trump had crossed a line, whether or not there needed to be an action taken. And you saw some of the people who talked to the committee, who are now on record with their views about the Trump ban, saying, "I'm not sure this rises to the level of implicit incitement." But what won out within the company was a sense that, "Well, when you look at the context in which Trump is making his statements," which was the immediate aftermath of an attempt to prevent the peaceful transition of power, the end of a really good run of peaceful handoffs between presidential administrations in the United States and its democracy.

When you looked at that context, and then at the way Trump's statements were being received by audiences, a decision was made that he could no longer be allowed to use that platform to reach massive numbers of people. We have freedom of speech in the United States, but we don't necessarily have freedom of reach, right? You don't have a constitutional right to broadcast your views to millions of other Americans. And so Trump certainly didn't have the right to use Twitter's platform and weaponize it in the way that he had been, and that he continued to do after January 6th. And they made, I think, a belated decision to finally take dramatic action. It took an attempted overthrow of the United States government to move them to finally hold him to a standard that they had avoided holding him to for months.

And so I think the argument that they held him to too high a bar is completely backward. We can argue about where the bar should be, but the fact is that they waited until it was really too late to hold him to the standard he was eventually held to. And I think we have to remember that context when we think about the possibility of him returning to social media now.

Jacob Glick:

I would build on that and put an even finer point on it: the insurrection itself actually wasn't enough for Twitter to ban Donald Trump. And that's a really important reason to correct the record on some of the commentary around the Twitter Files. What our investigation uncovered, what our depositions uncovered, is that the reason Twitter took the final step of banning Donald Trump is that there was a specific coalescing of his supporters around another violent action on January 17th, 2021. That spread across social media platforms and was in response to his tweet that he would not attend the inauguration, and another tweet that I'm now blissfully forgetting. So those tweets on January 8th were the final straw, once Twitter employees alerted their supervisors to this very specific threat of far-right violence on January 17th across the country and in Washington, D.C.

And we saw other evidence about the Oath Keepers planning for a muster after January 6th, and other websites like thedonald[.]win planning for another violent call to arms on January 17th. So to say that Twitter took preemptive or politically biased action against Donald Trump ignores the fact that they were more than willing to let him back on the platform as long as he didn't do the one specific thing they didn't want him to do, which was incite another violent attack against the United States of America. He still had the ability to do that, and he promptly did it on January 8th. So while I think there were some documents that said his tweets didn't rise to the level of incitement, our understanding is that, while that might have been the first-blush impression of Twitter employees, once they saw what was going on beneath those tweets, as Dean said, they saw another very specific, ongoing threat to our democracy. Which, by the way, Twitter employees we spoke to said they saw the same dynamic at work when Trump demonized the FBI, and called for attacks against it, after the raid of Mar-a-Lago.

And we had deposition testimony saying that what they saw on social media, including on Twitter, was a similar call-and-response dynamic between Trump and his top acolytes and then run-of-the-mill, everyday Americans endorsing political violence. You saw that again after the attack on Paul Pelosi and Trump's refusal to outright condemn that act of political violence. So that dynamic continues, and Trump's re-entrance onto the social media sphere in any mainstream way will likely yield the same results again. And the question people should be asking, if social media companies try to prevent him from coming back onto social media or take him off again, is not "Are they biased against him?" but "What specific threat of violence is he fomenting this time?"

Justin Hendrix:

I totally agree, and one of the things that's never considered is the counterfactual. What if Trump had attempted to re-mission the National Guard troops that were put at the Capitol after January 6th? What if the Oath Keepers had decided to activate that armed quick reaction force that was just across the Potomac? There was some discussion of doing that in the days after January 6th. To some extent, we'll never know. And it seems very clear to me that these companies ought to be weighing the current threat in their decisions about whether to restore his account. So we'll see.

This has been a wide-ranging discussion. We've talked about a lot. I would absolutely encourage my listeners to visit the trove of materials you have produced and to read these depositions. I intend to use some of them in my class this semester as a way of getting at some of the most significant issues at the intersection of technology and democratic processes. And I hope that some of the learnings we've taken from your work will be applied not just here in the United States, but elsewhere around the world, where people have come to me on many occasions in conversations on this podcast and said, "We've got to get our heads around these issues, and we've got to get these American companies to address them, because our democracies, in fact, are perhaps even more fragile." This autocratic moment that you referred to has perhaps come more frequently in other nations. So these are issues that are not just of import here. Dean, Meghan, Jacob, Alex, thank you so much for joining me today.

Alex Newhouse:

Thanks for having us, Justin.

Dean Jackson:

Thank you, Justin.

Jacob Glick:

Thank you.

Meghan Conroy:

Thank you.

- - -

More: Insiders’ View of the January 6th Committee’s Social Media Investigation, an essay by Jackson, Conroy and Newhouse.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
