Facebook’s Revisionist History

Justin Sherman / Jan 14, 2021

On January 11, after the attempted coup at the U.S. Capitol incited by President Donald Trump, Facebook’s chief operating officer Sheryl Sandberg took to the airwaves to downplay the extent to which the violence and insurrection were plotted on the platform. “We have clearly established principles that say you cannot call for violence,” she said in remarks related to the decision to suspend Trump’s account. “In this moment, the risk to our democracy was too big.”

While it’s a nice-sounding statement, this is just the latest example of Facebook’s promotion of revisionist history—painting over its record of inaction and problematic behavior with the color of its most recent and visible enforcement actions. The company senses the coming wave of regulatory proposals, and these anti-regulatory tactics are not going away.

In Sandberg’s telling of the tale, Facebook was comprehensively working against white supremacist and extremist groups ahead of January 6. “We know this was organized online. We know that,” she said of the attempted coup at the Capitol. “We, again, took down QAnon, Proud Boys, Stop the Steal, anything that was talking about possible violence last week. Our enforcement’s never perfect, so I’m sure there were still things on Facebook. I think these events were largely organized on platforms that don’t have our abilities to stop hate and don’t have our standards and don’t have our transparency, but certainly, to the day, we are working to find any single mention that might be leading to this and making sure we get it down as quickly as possible.”

Part of Sandberg’s assertion is technically correct. On November 5, for example, Facebook took down a key “Stop the Steal” group on its platform. But that group had amassed hundreds of thousands of members by the time Facebook removed it, and just weeks after that action, hundreds of new “Stop the Steal” Facebook groups had been created in its place. Similarly, Facebook was removing posts that praised Trump’s September debate call for the Proud Boys to “stand back and stand by,” though it still failed to catch other white supremacist organizing on the platform, including a rally in Portland spotted by the Tech Transparency Project, a watchdog organization. Other claims, such as the suggestion that Facebook’s role in the incitement to violence was minimal compared to other platforms, were at best a twisting of the facts.

Yet focusing on the most recent and visible enforcement actions—what Facebook did, or did not do, right before the January 6 attempted coup—is buying into Sandberg’s framing of the discussion. It fits the pattern of Facebook’s revisionist history: steering the dialogue toward the headline actions of the day, such as suspending Trump’s account or taking down the “Stop the Steal” group, as if to say, “see, we did a fine job in the end,” while obscuring the platform dynamics that contributed to the problems in the first place.

Buying into this framing obfuscates the bigger picture: Facebook has served as a continual vector for Trump’s election disinformation ever since his 2016 campaign began. Prior to last week, Facebook merely labeled Trump’s lies (just as Twitter did) rather than removing them outright; chief executive Mark Zuckerberg let Steve Bannon keep his account after he called for the beheading of two U.S. government officials. Not to mention that the company built and maintains an ecosystem in which extremist views thrive. Not to mention that Facebook’s central claim is that a platform built for microtargeting and content virality above all else magically has the incentives to fight its own systems and take down financially lucrative content that uses those very tools. Not to mention that executives keep denying that problems even exist while shutting down efforts to address them: Yann LeCun, the company’s chief AI scientist (who continually refuses to listen to tech ethicists), himself said on January 9 that “propaganda outlets, not social media, are the source of harmful disinformation,” dismissing the idea that any evidence produced since 2017 shows Facebook contributing to polarization. It’s hard to square that with recent headlines such as “Facebook Has Been Showing Military Gear Ads Next To Insurrection Posts.”

This has happened before. In October 2019, during a spate of Congressional hearings on social media platforms, Mark Zuckerberg said in a Georgetown University speech that his early motivations for starting Facebook were linked to the Iraq War. “I remember feeling that if more people had a voice to share their experiences, maybe things would have gone differently. Those early years shaped my belief that giving everyone a voice empowers the powerless and pushes society to be better over time,” he said. It was a blatant rewriting of history. It cast the platform’s founding as an act of political empowerment and opposition to war, rather than acknowledging that Zuckerberg’s foray into digital networking was a misogynistic and privacy-invasive website for men to numerically score the headshots of women classmates. It also, conveniently, aligned with Facebook’s visible push to portray the company as a beacon of political free speech and empowerment contra technology platforms in China—the firm’s latest anti-regulatory lobbying narrative.

Again this past year, Facebook did the same: amid nationwide protests against systemic racism and police brutality, Mark Zuckerberg posted that “Black lives matter” and touted the company’s apparent reconsideration of its content policies on incitements of violence. Yet this rhetoric again ignored the company’s history. It ignored the fact that it took an extensive advocacy campaign by Color of Change for Facebook to implement a more proactive civil rights policy on issues like hate speech and targeted advertising. It ignored Facebook’s decision, which evidently caused backlash within the company, not to remove Trump’s racist and violence-inciting post last May that said, “when the looting starts, the shooting starts.” Facebook again worked to focus the conversation on anecdotal remedies rather than confront its systemic problems.

Every company engages in spin tactics; this is hardly unique to the Menlo Park titan. But Facebook’s revisionist approach to history is particularly dangerous given the company’s profound influence on the nation’s discourse, its contribution to last week’s attempted coup, and its reach and power in many different parts of the world. The more regulatory proposals that come down the pike, the more Facebook will employ these tactics to shift the conversation into its preferred framings and to overstate the impact of highly visible content decisions at the expense of confronting its historical bad behavior. It is at these points that we cannot forget the company’s history, and that we must keep the focus on systemic solutions to the problems at hand.

Authors

Justin Sherman
Justin Sherman is the founder and CEO of Global Cyber Strategies, a senior fellow at Duke University’s Sanford School of Public Policy, a nonresident fellow at the Atlantic Council, and a contributing editor at Lawfare.
