Facebook’s Only Logic is Its Own Power

Justin Sherman / Nov 3, 2021

At the ongoing Web Summit in Lisbon, former British politician Nick Clegg, Facebook’s (now Meta’s) vice president of global affairs and communications, spoke Tuesday about his company’s role in propagating extremism and harmful content. The rhetoric he wielded—bad-faith arguments designed to dodge facts, avoid questions, and ultimately deceive viewers—now appears to fit a pattern that makes clear the only logic behind the company’s arguments is its own power.

It has been a whirlwind several weeks for the company and its PR machine. In early September, the Wall Street Journal reported that internal Facebook research found significant issues with Instagram and teenage mental health that executives knowingly played down in public. Then, the Senate held a hearing at which Antigone Davis, Facebook’s global head of safety, testified and sought to minimize Instagram’s harm to young users. Shortly thereafter, Frances Haugen, who worked on Facebook’s civic integrity team, identified herself as the source of the leaks and testified before the Senate Commerce Committee; the documents she brought forward connected Facebook to a litany of concerns, including hate speech, disinformation, division, and even human trafficking.

In this context, Clegg’s remarks at the Web Summit underscored another dimension of Facebook’s hypocrisy. When the firm wants to claim credit for something it sees as good—from the Arab Spring movements a decade ago to voter mobilization in recent elections—it’s all about the platform’s groundbreaking role in driving social and political change. Yet when the results are negative, as with January 6 or harm to children, Facebook turns around and asks how the media, regulators, or the public dare engage in such absurd techno-centrism.

Responding to questions about Haugen’s leaks at Web Summit, Clegg not-so-subtly implied that whistleblowers everywhere are not to be trusted, saying, “whistleblowers are entitled to blow whistles and describe the world as they see it.” Despite disclosures of the company’s own researchers’ extensive work on the role of its algorithms, he called the media’s portrayal of Facebook’s algorithms a “caricature.” Dismissing the copious evidence that hate speech is pervasive on the platform, Clegg instead claimed—without presenting any documentation to back it up—that most of Facebook’s content was “barbecues and bar mitzvahs” generated by users.

Similarly, when asked on CNN a few weeks earlier if Facebook at least contributed to polarization and other problems, Clegg said “of course you see the good, the bad, and the ugly of humanity show up on our platform as well. Our job is to mitigate the bad, reduce it, and amplify the good, and that’s what this research is all about.” Yet Clegg went on to say the Instagram research was not “wholly surprising,” even though Facebook suppressed the internal findings (belying the notion that Facebook is working its hardest to reduce harm). “We’re never going to be able to eliminate the basic human tendency to compare yourself to others,” he said—again, behaving as if that were the sole critique to address. Clegg would not explicitly say that Instagram contributes to body image and mental health problems among vulnerable teenagers.

Regarding the violent attack and attempted coup at the Capitol on January 6, Clegg said, “if the assertion is that January 6 can be explained because of social media, I just think that’s ludicrous. The responsibility for the violence of January the 6 and the insurrection on that day lies squarely with the people who inflicted the violence and those who encouraged them, including then-President Trump and, candidly, many other people in the media encouraging the assertion that the election was stolen. I think it gives people false comfort to assume that there must be a technological or a technical explanation for the issues of political polarization in the United States.”

Clegg repeatedly employs this classic straw man tactic: constructing a bogus argument that people are not, in fact, actually making—and then knocking it down as if that legitimately dismisses the concerns and the credibility of all critics. Despite many nuanced, varied, and fact-based critiques of the company and its conduct, Clegg pretends that the only argument being made is that Facebook is the sole cause of all body image issues or political polarization. That is of course absurd, but that’s precisely the point. It paints Facebook’s numerous, informed, and intelligent critics—including its own employees, who are quoted extensively in the reports on the leaked documents—as irrational proponents of bizarre arguments, rather than observers of a company that continually places its own profit over the interests of the public and people worldwide.

Importantly, though, Clegg’s recent interviews also highlight that Facebook’s arguments are a means to an end—specifically, his claim on CNN that “it gives people false comfort to assume that there must be a technological or a technical explanation” for political issues of the day. Remember, the underlying logic of Facebook’s arguments is the positive value of its power. When Zuckerberg and team want to be perceived as helping the world, all the talk is about technology’s revolutionary power. The minute anyone identifies and calls out a harm, however, the company spins around and asks how on earth anyone could put the blame on technology.

Facebook has always tried to have it both ways. Back in 2011, Facebook executives made clear that they were not saying Facebook caused the Arab Spring. At the G8 summit, Mark Zuckerberg said of the movements that “it would be extremely arrogant for any specific tech company to claim any meaningful role in those.” David Fischer, a Facebook vice president, said later that year, “the social network gives people a way to express themselves and have their voice heard by governments and dictators—this was not so before… However, I think Facebook gets too much credit for these things. In the end, the people who make the revolution are the brave ones here.”

Simultaneously, however, Facebook leaned into the narrative of its platform as a world-changing force for good. Zuckerberg also said at the G8 summit, “I think that Facebook was neither necessary nor sufficient for any of those things to happen. I do think over time the internet is playing a role in making it so people can communicate more effectively and that probably does help to organize some of these things.” In a 2012 letter to shareholders, Zuckerberg wrote, “We believe building tools to help people share can bring a more honest and transparent dialogue around government that could lead to more direct empowerment of people, more accountability for officials and better solutions to some of the biggest problems of our time.” He added, “we believe that leaders will emerge across all countries who are pro-internet and fight for the rights of their people, including the right to share what they want and the right to access all information that people want to share with them.”

In 2013, Facebook published a ten-page white paper touting the positive power of its platform; Zuckerberg then told WIRED that increased online connectivity, including via Facebook, could raise quality of life around the world and strengthen the power to hold governments accountable. Years later, he was still making similar claims, writing in a February 2019 post that “while any rapid social change creates uncertainty, I believe what we’re seeing is people having more power, and a long term trend reshaping society to be more open and accountable over time.” He added, “I’ll never forget how right after we launched News Feed, we saw millions of people organize marches against violence in Colombia. We saw communities come together to do viral fundraisers.” As recently as the summer of 2020, Zuckerberg wrote an op-ed in USA Today saying that “by giving people a voice, registering and turning out voters, and preventing interference, I believe Facebook is supporting and strengthening our democracy in 2020 and beyond.” (Of course, entirely “preventing” election interference is impossible.)

All told, the company line of connecting people worldwide to drive positive change dates back years, and Clegg’s arguments can be seen as an attempt to preserve that narrative in the face of so much evidence that complicates or, perhaps, refutes it. Even as Facebook attempted to distance itself from the notion of causing literal revolution, it advanced a general narrative of the network’s revolutionary power. Yet the minute its executives are hit with questions about problems to which Facebook contributes, they accuse the media, regulators, and critics in general of engaging in techno-centrist discourse that ignores political, social, economic, and other factors.

On CNN, for instance, Clegg primarily spoke about how Facebook was not the sole reason many Trump supporters and far-right extremists attacked the Capitol on January 6. The pendulum swung in the opposite direction from the narrative of disruption and revolution, as Clegg suggested Facebook had no influence at all on, and zero responsibility for, what happened. Never mind that “Stop the Steal” groups proliferated on Facebook after the November 2020 election. Never mind that Facebook only banned “Stop the Steal” content after the attack on the Capitol. Never mind that right-wing extremists used private Facebook groups, in addition to other online media, for months to plan the attack. And never mind Facebook’s broader cozying up to the Trump administration during its tenure—or its repeated excusal of racist, violence-inciting content posted by elected officials. The list goes on.

Using arguments as a means to an end is nothing new in corporate public relations; certainly, large companies, especially those facing public scrutiny and impending regulation, have long employed straw man and other such arguments as it suits them. Google and Apple, for example, have themselves argued or implied in Congressional testimony and media appearances that positive things (more jobs, access to information) are directly tied to their companies while negative things (misinformation, anti-competitive market power) are either not issues or not entirely traceable to them. Yet Facebook’s pivoting on the relationship between technology and politics, and between its platform and real-world change, is a particularly blatant kind of manipulation. In this narrative, good things are directly linked to Facebook’s innovation, its global reach, its power; bad things are not even remotely tied to Facebook and instead result from political and other factors.

The real logic, of course, is that Facebook’s power is good—above reproach, even—and that, for the good of the world, Facebook must keep it.
