Ben Sasse is right: the claims of Big Tech and its critics cannot be reconciled

Justin Hendrix / Apr 27, 2021

Another day, another technology hearing on Capitol Hill.

The hearing, convened by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law under Senator Chris Coons (D-DE), could have pulled its title from an academic conference: Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds. Over the course of nearly three hours of testimony, two versions of reality scraped past each other: one in which the social media firms have on balance improved the human condition and are making meaningful progress at dealing with the problems on their sites, problems that are in any case statistically insignificant in the broader scheme of things; and one in which a decade's worth of evidence shows the damaging effects of social media and its business model on individuals and societies, leaving democracy in peril.

Senator Ben Sasse (R-NE) paused about an hour into the discussion to try to force a constructive collision between these two realities, represented on the one side by technology executives, including Facebook Vice President for Content Policy, Monika Bickert; Twitter Head of U.S. Public Policy, Lauren Culbertson; and YouTube Government Affairs & Public Policy Director, Alexandra Veitch; and on the other by Harvard Shorenstein Center on Media, Politics and Policy Research Director Dr. Joan Donovan; and Center for Humane Technology President Tristan Harris. While the Senator promised he was not trying to get the participants "to fight," but rather to enter "into dialogue," he nevertheless noted that the answers the platforms were providing were simply "not reconcilable" with the positions of their critics. In particular, Senator Sasse wanted the platforms to address the fundamental charge levied against them: that their business models are responsible for the externalities they produce.

"You definitely aspire to skim the most destructive habits and practices off the top of digital addiction. But the business model is addiction, right? I mean, money is directly correlated to the amount of time people spend on the site," said Senator Sasse.

In reply, Facebook's Bickert pointed to her company's purported focus on "meaningful" social interactions, which she claims has reduced the amount of time people spend on Facebook. She also noted that the teams specifically tasked with dealing with problems like harassment, hate speech and disinformation are focused on reducing the prevalence of problematic content. Twitter's Culbertson said the company's ranking algorithm likewise helps people spend less time on Twitter by cutting down on screen time. Senator Sasse, noting the replies from Bickert and Culbertson, said "we're hearing responses that are only around the margins," and turned to YouTube's Veitch, who pointed out that users can set timers to limit the time they sink into watching videos. She added that YouTube has sent over a billion "take a break" emails to encourage its addled users to log off.

As the hearing progressed, a litany of other concerns was raised. The problem of Facebook's discriminatory advertising practices, on which Bickert said the company has "made a number of improvements." The problem of disinformation related to COVID-19. The problem of children's exposure to problematic content on YouTube, and the collection of data on children in violation of COPPA (which Veitch dismissed as the result of a "novel interpretation" of the law). The problem of political disinformation inciting violence.

Indeed, Senator Richard Blumenthal (D-CT) queried Facebook's Bickert about measures the platform takes to reduce hate, division and violence. Pointing to a blog post Bickert wrote ahead of the verdict in the trial of police officer Derek Chauvin, who murdered George Floyd, Blumenthal asked "if Facebook does in fact have a dial for hateful content, can the company dial it down now, why doesn't it dial it down already?"

Noting there are costs and benefits to so-called "break the glass" measures that are intended to prevent behaviors that may contribute to violence, Bickert replied that "those measures aren't perfect. So there will be content that actually doesn't violate our policies that was flagged by our technology that really shouldn't be reduced. So when we take those measures we're mindful of the cost. It's always this balance between trying to stop abuse and trying to make sure that we're providing space for free expression and being very fair."

In other words, Facebook cannot engineer a system that prevents violence without also reducing free expression. This is precisely the dilemma that, according to an internal report made public by BuzzFeed News, resulted in Facebook's failure to take action to curb the growth of the network that contributed to the insurrection at the US Capitol. The report said the company lacks the policies and tools to make the determinations necessary to stop such phenomena, and that where it can make them, the interventions are often manual.

"Cobbled together across products, our new media ecosystem is the networked terrain for a hybrid information war that ultimately enables dangerous groups to organize violent events—like the nationalists, militias, white supremacists, conspiracists, anti-vaccination groups, and others who collaborated under the banner of Stop The Steal in order to breach the Capitol," wrote Dr. Donovan in her written testimony ahead of the hearing. But to listen to the representatives of Facebook, YouTube and Twitter who testified today, the behavior of such groups on their platforms is statistically insignificant in comparison to the rich dialogue, entertaining videos and meaningful connections we all maintain on social media.

If this hearing did anything to advance the dialogue, it may simply be that it proved Senator Sasse's point: the positions of Big Tech and its critics are indeed irreconcilable. That puts the focus on lawmakers, whose duty it is to reconcile the interests of society and democracy with the business interests of the platforms. Just as Facebook must run a cost-benefit analysis when it decides to what extent it should reduce free expression on its platform to reduce potential violence, lawmakers must decide how to solve this gnarly equation.

But one thing seems clear. As Dr. Donovan put it: "The cost of doing nothing is democracy’s end."
