An interesting exchange occurred late in a hearing today in the House Committee on Science, Space & Technology Subcommittee on Investigations and Oversight. The hearing set out to address the need for independent researchers to access social media data. After more than an hour of discussion on the topic, Representative Sean Casten (D-IL6) put a question to the panelists: academic researchers from NYU, the University of Illinois at Urbana-Champaign, and Northeastern University.
Referring to the January 6th attack on the U.S. Capitol by supporters of former President Donald Trump, Casten asked:
Is it reasonable to assume … that given how much was being amplified on Facebook, that a bunch of smart computer nerds at Facebook had knowledge a priori of what was being organized?
By reply, Dr. Kevin Leicht, Professor, University of Illinois Urbana-Champaign Department of Sociology, said:
I think it’s possible. It’s also possible that nobody at Facebook actually bothered to pay attention to what their algorithms were recommending. Whether there was deliberate promotion or (a better description would be, I suppose) a benign neglect of what the algorithm was doing, in either case there are invidious problems there. You know, whether an actual person was involved or not.
“I do want to make clear,” said Rep. Casten, “sometimes we get caught in our own knickers when we say, well, sure something is immoral but it’s not illegal so it must be okay. For my money, if I had the capability to anticipate that there was going to be an attack on the U.S. Capitol and I didn’t give a damn? There has to be some responsibility there. Shame on us if it’s not illegal, but my goodness don’t look the other way.”
Laura Edelson, a PhD Candidate at NYU Tandon School of Engineering, replied:
I remember the morning of January 6th, because I told my team that morning that I thought it was going to be a bad day. Because this is, you know, this is what I live and breathe. I look at this stuff every day and it’s awful. I don’t know if anyone at Facebook knew it was going to be a bad day. I don’t work there. But one of the things we do know is that their internal research has been telling them about the extremist problem. For years, they knew that their algorithm was promoting hateful and extremist content. They knew that there were fixes. They knew that those fixes might come at the cost of user engagement and they chose not to put those fixes into place. So as to whether anyone knew on January 6th, I don’t know, but they knew about the problem. They knew how to fix it, and they chose not to.
Edelson and Leicht are correct that Facebook had the necessary information and internal research to generally understand the role that their platform played in driving polarization and extremism. The revelations reported by the Wall Street Journal confirm that. And they knew they were taking a gamble by continuing to permit President Trump to use the platform after he crossed the line repeatedly in 2020, suggesting violence in response to the protests following the murder of George Floyd and spreading false claims about the election.
In fact, Mark Zuckerberg himself warned about the general possibility of civil unrest after the election in the summer of 2020. The Wall Street Journal’s Jeff Horwitz and Deepa Seetharaman reported in May 2020 that the company shelved “efforts to make the site less divisive,” and on January 31, 2021, that the “company’s data scientists had warned Facebook executives in August that what they called blatant misinformation and calls to violence were filling the majority of the platform’s top ‘civic’ Groups,” with an exemplary group of 58,000 seeing “enthusiastic calls for violence every day”.
But further, as New York Times reporters Sheera Frenkel and Cecilia Kang report in their book, An Ugly Truth, specific teams inside Facebook were monitoring the run up to January 6th and watching in horror as the attack unfolded. Recounting how indictments of the insurrectionists contain details of their communications on Facebook, including preparing for violence, Frenkel and Kang tell of the movements of one Oathkeeper militiaman and the corresponding actions of Facebook executives across the country:
Facebook’s security and policy teams were aware of the activity and were growing increasingly alarmed. When journalists reported the Red-State Secession group to the company on the morning of January 6, an answer came back within hours that the security team had been reviewing the page and would remove it immediately. But while the platform was moving quickly to try to remove groups and pages calling for violence, they could not undo the simmering anger that for months had been building across thousands of pages. Under the banner of “Stop the Steal,” a motto that underscored a wide-ranging allegation that the election had been stolen from President Trump, thousands of people had been mobilized to travel to Washington and take action.
Once in Washington, people freely celebrated on the morning of January 6 with posts on Facebook and Instagram showing the crowds that gathered to hear President Trump deliver an address. Minutes after Trump ended his speech with a call to his supporters to “walk down Pennsylvania Avenue” toward the Capitol Building, where hundreds of members of Congress sat, people within the crowd used their phones to livestream clashes with police and the storming of the barricades outside the building. Many, including [Oathkeepers leader Thomas] Caldwell, were getting messages on Facebook Messenger from allies watching their advance from afar.
“All members are in the tunnel under capital [sic],” read the message Caldwell received as he neared the building. Referring to members of Congress, the message added, “Seal them in. Turn on the Gas.”
Moments later, Caldwell posted a quick update on Facebook that read, “Inside.”
He immediately began to get detailed instructions through an onslaught of Facebook messages that encouraged him to “take that bitch over.” “Tom, all legislators are down int he Tunnels 3floors down,” read one message. Another instructed Caldwell to go through each floor “from top to bottom” and gave him directions about which hallways to use. He was hunting for members of Congress, in a mission that he and other members of far-right groups across the United States saw as an act of insurrection and revolution.
Thousands of miles away, from their homes in the verdant suburbs surrounding [Menlo Park], Facebook executives watched with horror. On the advice of the security team, who warned that there was potential for violence in Washington that day, the group held a virtual meeting to discuss contingency plans. It had been two months since the U.S. presidential election had ended, but executives felt like they had been holding their collective breath since November 3. “None of us had had a chance to exhale. We were still waking up every day feeling like the election wasn’t yet over, and all eyes were on our response to Trump’s unwillingness to concede the race to Biden,” recalled one of the executives.
For emphasis: “…the security team, who warned that there was potential for violence in Washington that day…”. They knew of the possibility.
Whether they had put all of the pieces together enough to give an explicit warning to the government is another question. An internal company report published by BuzzFeed News indicates Facebook assessed after January 6th that it had failed to recognize and pre-empt the harm posed by the Stop the Steal movement that metastasized across its pages and groups, and hypothesized what actions it would have to take to do so in the future.
A key question that the January 6 Select Committee and Congressional investigators on other committees should ask is what warnings Facebook provided to the FBI or to DHS or any other law enforcement or government entity, and through what channels. We know that Parler, the social media app favored by Trump supporters, sent specific warnings to the FBI on numerous occasions ahead of the attack on the Capitol. Facebook has not answered my queries as to whether it sent such warnings.
But to Mr. Casten’s query about the moral question of Facebook’s culpability, consider two more things:
First, just days after the insurrection, Facebook COO Sheryl Sandberg said, “I think these events were largely organized on platforms that don’t have our abilities to stop hate and don’t have our standards and don’t have our transparency.” Charging documents would soon prove her so wrong that we can only wonder why she ever said such a thing.
Second, at a House Energy & Commerce subcommittee hearing in March, Facebook CEO Mark Zuckerberg refused to take any responsibility for the January 6 attack. In his opening statement, he went so far as to dismiss suggestions that his company stokes divisiveness. “Some people say that the problem is that social networks are polarizing us, but that’s not at all clear from the evidence or research,” he testified. He pointed to alternative culprits: “I believe that the division we see today is primarily the result of a political and media environment that drives Americans apart.”
Are these the statements of a leadership team that feels remorse, knowing the evidence it had that its business contributed to the violence on January 6? Are these the statements of people we can trust to run a communications platform for 3 billion people that perturbs our politics in ways we simply do not understand?
To that question, the answer is clear.
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.