
The Era of Big Tech Minimalism: Missouri v. Biden Gives Platforms Cover to Retreat From Election Integrity Efforts

Nora Benavidez / Jul 11, 2023

Nora Benavidez is senior counsel and director of digital justice and civil rights at Free Press.

U.S. District Court Judge Terry Doughty at a Senate hearing in 2017.

Platform accountability and First Amendment advocates received a significant blow on the Fourth of July: U.S. District Court Judge Terry Doughty issued a ruling and preliminary injunction limiting contact between Biden administration officials and social media platforms over certain online content, temporarily barring officials from meeting with the companies about posts that violate their policies against hate and lies. The ruling invokes “free speech” as a shield to protect users and content on platforms. But its effect will be to give social media platforms more cover to do less, as they shrink their trust and safety teams and roll back election integrity efforts ahead of 2024.

Social media platforms are already notoriously opaque – offering little insight into their content moderation practices and enforcement decisions. Even less is known about the discriminatory impacts of the machine learning tools they develop and deploy. With the ruling from Judge Doughty, transparency is likely to remain elusive. But here’s what we do know:

Social media platforms have the resources needed to disrupt the spread of hate and anti-democratic disinformation — especially that which targets marginalized users who are part of various protected classes. Strong content moderation, like the “break-glass” measures Meta employed ahead of the 2020 elections, really can mitigate the virality of harmful, violative content. When Meta turned those measures off after the November 2020 elections, harmful content surged; in turn, that content created fertile ground for the “Big Lie,” which helped incite violence during the January 6th insurrection.

Accountability is hard won when it comes to Meta, Twitter, YouTube and other platforms. In fact, these companies sometimes amplify toxic and false information to users, which boosts their bottom lines. Meta and YouTube profit from amplifying hate and disinformation, while Twitter has restored the accounts of some of the most hateful people, who generate the sort of traffic and user subscriptions the company hopes will recover the revenue it has lost since Elon Musk took over the platform last October. According to data provided to Free Press by researcher Travis Brown, nearly a third of the tens of thousands of previously suspended accounts restored in Musk’s “general amnesty” have opted to subscribe to Twitter’s “verification” service.

We also know that governance decisions affect the bottom line at these companies: gutting staff who effectively moderate content – as in the case of Musk’s Twitter – has contributed to catastrophic financial losses for that platform. U.S. ad revenue on Twitter has plummeted some 60 percent over the past year, due in part to Musk’s full-fledged retreat from trust and safety efforts. This, in turn, has undermined Musk’s ability to keep up with massive debt payments expected by the banks that helped finance his $44 billion purchase of the platform. Musk’s failed experiments at Twitter should have taught a hard lesson to other platforms: content moderation is good for business.

Instead, we’re seeing negligence mushroom across social media. Big Tech has long evaded accountability when it comes to applying its corporate policies robustly and equitably across the globe. YouTube recently reversed its election misinformation policy, explaining that its attempts to rein in lies about the results of the 2020 presidential election had no meaningful impact. Under the new policy, YouTube will stop removing content that advances false claims of fraud, errors, or glitches in past U.S. elections. By eliminating even the veneer of aggressively monitoring election-related hate and lies, YouTube is taking a page from Elon Musk’s playbook.

Elsewhere, platform negligence is growing. Meta, having allowed Donald Trump back on its platforms, is failing to label or remove his videos that contain lies in violation of the platform's written policies. (This video posted on Facebook by Donald Trump is one example that Meta neither labeled nor removed per its guidelines, despite its apparent violation of the platform’s policy on election lies.) TikTok has failed to permanently replace its head of trust and safety, who left earlier this year. With far-right users flocking to the newly launched Twitter rival Threads to promote hate, and Bluesky failing to robustly moderate bigotry, it seems none of the platforms can effectively balance user safety and free expression.

Last October, my organization, Free Press, graded the four major platforms’ policies against 15 recommendations that they should all be implementing to curb the spread of election disinformation and extremism across their networks. Our research found that although tech companies have long promised to fight disinformation and hate on their platforms, there is a notable gap between what the companies say they want to do and what they actually do in practice. Companies like Meta, TikTok, Twitter, and YouTube do not have sufficient policies, practices, AI systems or human capital in place to materially mitigate harm ahead of and during election periods. And even suggestions on ways to improve – from civil society and government – have been met with indignation and inaction by these companies. Congressional hearings have yielded little insight into platform business practices, and letters from civil society and dignitaries asking for data on algorithms, enforcement and staffing have produced little substantive information.

Judge Doughty’s ruling certainly won’t advance dialogue and coordination. First, his ruling is overly broad, grossly overlooking First Amendment precedent on these issues. The mere act of contacting social media companies about violative content, including lies about COVID and elections, shouldn’t in itself be considered an ‘attack against free speech,’ in the judge’s words. In his preliminary injunction, the judge bans White House officials from even requesting internal reports about the companies’ actions. The ruling does allow several exceptions under which the government may contact platforms to flag threats to public safety, national security, and foreign election interference. Legally, these carve-outs create a dangerous opening for platforms and future courts to selectively determine what speech the government may ask about and what it cannot. And practically, by the time platform content rises to the level of a public safety, national security or election interference threat, it has likely already gone viral, and it is dangerously late in the game to mitigate its visibility and real-world harm.

This is the wrong direction, moving us away from transparency under the guise of fighting censorship and likely compounding an already dire problem. To be sure, we should be wary of government intrusions into private speech, including efforts by officials to influence social media platforms’ behavior. We know how officials sometimes abuse their power to limit speech from dissenting and minority voices. From fighting government retaliation against protesters to suing and successfully settling a lawsuit against former President Donald Trump for his retaliation against the press, I have long advocated for staunch safeguards against government overreach into private speech and against abuse by those in power who seek to silence speech they dislike.

But Judge Doughty’s ruling offers convenient cover for platforms to do less ahead of 2024, in a new era of low-achieving platform behavior. Cordoning off communication between tech companies and government will surely accelerate the spread of the COVID and election lies that Free Press and our partners have worked so hard over the last several years to draw attention to and contain.

We already see the chilling effect of Judge Doughty’s ruling: the State Department has just canceled a meeting with Meta. Many of us spent years encouraging platforms to better engage with civil society, researchers, campaigns, and government agencies to guard against national security risks and threats to democratic engagement for voters. As the United States gears up for the biggest election year the internet age has seen, we should be finding methods to better coordinate between governments and social media companies to increase the integrity of election news and information.

With license to shirk communication with government officials, who’s to say platforms won’t further disregard other voices seeking data transparency and accountability from companies that play such a huge role in our public discourse? I and other experts suspect that tech companies will skirt accountability by doing the bare minimum. When election lies and other falsehoods are left unchecked on platforms, it is voters and democracy that suffer. Lies about the 2020 election – despite overwhelming evidence that it was, in fact, the most robustly certified and validated election ever – continue to permeate online discourse and even inspire anti-democratic laws that limit avenues to vote and the authority of local election offices.

Judge Doughty’s misguided ruling will only fan the flames of Big Tech minimalism under the guise of protecting free speech. Unfortunately, that won’t serve users on platforms, the platforms’ long-term business interests, or voters in the real world.

Authors

Nora Benavidez
Nora Benavidez is senior counsel and director of digital justice and civil rights at Free Press. She is the lead author of Big Tech Backslide (2023), a new report from Free Press that examines how tech companies’ retreat from platform integrity harms democracy, as well as Empty Promises (2022), analyzing...