
Section 230 Rightly Protects Voice, but Allowing Platforms to Use Amplification Algorithms With Total Impunity Does Harm

José Marichal / Feb 27, 2023

José Marichal is a professor of political science at California Lutheran University.

This week, the U.S. Supreme Court heard arguments in Gonzalez v. Google and Twitter v. Taamneh. At issue in these cases is whether the tech companies aided terrorists by hosting ISIS propaganda videos. But the key question before the Court in Gonzalez v. Google is whether Google acted as a publisher by algorithmically recommending ISIS videos, and thus whether that conduct requires revisiting Section 230 of the Communications Decency Act, enacted as part of the 1996 Telecommunications Act.

The original intent of the law was to protect online platforms from liability for third-party content during the burgeoning days of the Internet. Legislators rightly feared that allowing emerging platforms like AOL to be sued for any content they host would produce an unwelcome chilling effect, since platforms were (and remain) incapable of effectively monitoring all user-generated content.

But seldom do we ask the question, “What is voice for?”

Section 230 and Democratic Discourse

Indeed, Section 230 rightly extended the voice of billions of people. It is undeniable that social media has expanded the range and diversity of perspectives in our public sphere. The expansion of voice, what Zizi Papacharissi calls the formation of “affective publics,” has given people from groups that media gatekeepers had largely kept out of public conversation a platform to be seen. The plurality of worldviews on social media challenges the neutrality of the “marketplace of ideas” by forcing people to confront the perspectives of the systematically marginalized. Political theorist Iris Marion Young argued that true justice and fairness cannot be achieved without an acknowledgement of how social rules and norms constrain some groups of people and empower others. Social media has made those instances much more apparent.

Effective democratic discourse, however, requires more than the expression of voice. Voice does not exist in an atomized vacuum. If we seek to be part of a political community, we are obligated to make space for a plurality of views. And if we want that political community to be healthy, one in which citizens “gain wisdom” and “live well,” then we need to attend not only to the expression of voice, but to how we listen to the voices of others.

University of Chicago philosopher Agnes Callard argues for the importance of “intellectual fighting” in the development of persons. Rather than merely causing discomfort, fighting to defend a position, she argues, equips you with a “sense of potentiality” (i.e., a sense of what you are capable of becoming). Arguing and occasionally “losing” also provides a useful humility that reminds us of the ambiguity and uncertainty of existence. We need the “voice” that Section 230 protects in order to “account for ourselves” in the public sphere (as Hannah Arendt encouraged), but we also need the humility to bring that voice into community openly, with the possibility that we can develop ourselves by listening to the voices of others.

The Algorithm and the Rules of Engagement

To produce this kind of discourse is a difficult ask for any democracy in any age, but algorithms designed for optimization make this hard task nearly impossible. Recommendation algorithms replace our native efforts to “find content” with “more of what we want,” however the algorithm perceives that. To borrow from critical theory, we are always already being steered in directions the algorithm wants us to go. Where it wants us to go may be unclear, but what is clear is that it is optimized to make us “keep driving” (i.e., stay on the platform). If Callard is right that we occasionally need to defend our position to “develop our potentiality” and grow into more mature, developed selves, then we must ask whether optimization for engagement provides opportunities to “fight well.”
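To make that dynamic concrete, consider a minimal, purely illustrative sketch of engagement-optimized ranking. The fields, weights, and function names below are hypothetical, not any platform's actual system; the point is simply that the objective scores predicted engagement and nothing else, so the feed surfaces whatever scores highest, whether or not it fosters productive argument.

```python
# Purely illustrative: a toy engagement-optimized ranker.
# Fields and weights are hypothetical, not any real platform's system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimated click probability
    predicted_replies: float  # replies often spike on incendiary content
    predicted_dwell: float    # expected seconds of attention

def engagement_score(post: Post) -> float:
    # The objective is time-on-platform, not discourse quality:
    # content that provokes replies and holds attention ranks highest.
    return post.predicted_clicks + 2.0 * post.predicted_replies + 0.1 * post.predicted_dwell

def rank_feed(posts: list[Post]) -> list[Post]:
    # "More of what we want," as the algorithm perceives it.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in such an objective distinguishes a productive argument from a pile-on; both register simply as engagement.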

It might seem unusual to argue for any kind of “fighting” on social media platforms. Indeed, the plaintiffs’ argument in Gonzalez v. Google is that YouTube/Google isn’t acting as a neutral platform but is using its optimization algorithms to “publish” unwanted inflammatory content, hence encouraging fighting. Isn’t the very problem at issue in Gonzalez v. Google that Google points its computational firepower toward presenting users with unsolicited thumbnails that may help radicalize them to join ISIS? Doesn’t this, too, engender a “sense of potentiality”? On that view, we should encourage less fighting and more understanding.

But fighting itself isn’t the problem. The problem is fighting without rules of engagement. In 1867, to endear boxing to the London aristocracy, the Marquess of Queensberry lent his name to rules for the sport that included a number of measures intended to “civilize” it, indeed to turn it into a craft of sorts. These included padded gloves, the three-minute round, the elimination of wrestling, and the ten-second count for knockouts. While the ethics of boxing can be debated, the rules were largely successful in providing more humane ground rules for a publicly sought-after display of aggressive skill.

Algorithmic optimization in a social media environment produces discursive fighting without anything like the Queensberry rules. When we are algorithmically steered toward outrageous and incendiary ideas, we reflexively seek to argue the absurdity of the point. But does this arguing make us better people (or even better at arguing)? Often we are drawn into pointless arguments with random strangers online, without a clear sense of why we are fighting and without any ground rules. To argue with a random person on social media over an algorithmically served topic is to be blindfolded, throwing punches in a darkened room. It often turns the effort to craft, sharpen, and refine arguments into an exercise in absurdism. Like Gregor Samsa’s transformation into an insect in Kafka’s Metamorphosis, we become something different, stilted and deeply undemocratic.

It is difficult for “intellectual fighting” to serve any redeeming purpose when optimization algorithms are presenting you with “engaging” content (e.g., a Congressman’s Christmas card with his family all holding AR-15s). If you “fight” this post, who are you fighting? How do you know if you are winning? Although I might fear a broken nose if I fight someone face to face, I can process the contextual cues and gain a sense of why the fight is happening and what I need to do to get out of it. Online, I lack these cues. If those with whom I am not ideologically congruent take a tweet of mine and make it “go viral” (the dreaded Twitter “dunk”), the result feels emotionally less like a fight and more like being jumped in an alley. Even if the immediate danger to my physical self is reduced, psychologically I can’t anticipate where I’m going to be hit from, or how, or who is judging, or even why we are fighting in the first place.

In this case, fighting loses any sense of redeeming value. I’m not learning “compassion for the stranger” or “how to die well” or any other valuable skills. I may be learning about life’s contingency or unfairness, or a more general lesson about the futility of posting my opinions on social media. The result is a meaningless discourse environment in which our voices are removed from the embodied selves that produced them and repackaged as atomized packets of “engagement.” The “voice” that Section 230 was intended to defend becomes grist for the algorithmic mill.

The Futility of Confronting Algorithmic Aggression

For historically marginalized groups, this futile exercise in “fighting” is even more taxing, since racist and misogynistic discourse is often mischaracterized by engagement algorithms as “must see.” Journalist Ashlee Marie Preston refers to this as algorithmic aggression: the constant obligation to respond to racist posts (something Justice Sotomayor touched upon during oral arguments). This aggression poses an additional problem: communities of color and other marginalized groups are often not afforded the kinds of arguments on social media that would engender a “sense of potentiality.”

Marginalized groups are often forced to use social media platforms to demand inclusion and recognition. While much of that engagement consists of expressions of joy (and every other human emotion), some of it consists of the taxing effort of insisting on epistemic closure in debates about the validity and legitimacy of Black, Indigenous, and queer experiences. That this valid claim for closure remains contested is much of what gets debated on social media platforms under the guise of “free speech” and “cancel culture.” Conjecture and refutation are critical for an open society, but they become exhausting and demeaning when one is expected to constantly defend one’s basis for equal treatment or inclusion.

Besides, one might ask, what benefit does “fighting” over one’s legitimate place in society produce? Even if opposition comes not from bad faith or a desire to maintain superiority, but from genuine curiosity or ontological difference, it still falls disproportionately on the oppressed and marginalized to “account for themselves” rather than to engage in arguments that are about discovering potentiality. This inequity turns the value of argumentation into a psychological cost and inhibits democratic deliberation.

In my 2012 book, Facebook Democracy, I argued that Facebook changed the nature of public conversation by disincentivizing discourse rooted in conjecture and refutation and encouraging talk based on “connection” or “identity maintenance.” Put another way, it encouraged discourse designed to “find your tribe” over discourse designed to “seek to understand the other.” In the ensuing decade, social media platforms have become better at using data and algorithmic optimization to keep users on the platform. They have done so by promising a self-curated world in which users encounter only those things that interest them. When users do encounter diverse viewpoints, they do so on their own terms: as red-herring foils that reinforce a personally curated worldview. Rather than “intellectual fighting,” engagements with public life become exercises in intellectual self-gratification.

Deliberation, or Burn and Destroy?

In The Human Condition, Hannah Arendt lamented that the increasing complexity of the modern world would motivate a retreat to the self, more specifically a retreat to the worlds of labor (what we do to survive) and work (what we make and do in the commercial world). This would come at the expense of action, the third element of what she called the vita activa: the world of public life, where we use speech to present ourselves to others. This is the world of discussion and persuasion (i.e., the world of fighting) over our shared human existence.

Social media and algorithmic optimization distort the vita activa by changing the nature of our “world in common.” Instead of achieving “great deeds” through public life by “making an accounting of yourself” in a way that persuades others, “speech action” becomes performative, an exercise in signaling your loyalty to like-minded others. The creativity of the vita activa becomes oriented around the ways in which you can “burn” or “destroy” others, whether through snark or outrage. Words become the stuff of labor and work: mere objects used to produce more words, which are ultimately used to encourage us to consume more goods rather than to deliberate over our “world in common.”

Oral arguments in both cases seem to suggest a narrow ruling that would likely leave Section 230 little changed. But other Section 230 cases will eventually come before the Court, and the question of how recommendation algorithms affect the nature of our public discourse remains. Ultimately, the courts will likely leave it to Congress to update the 1996 law. It would serve all of us for Congress to take up this task and to think seriously about how the ways in which we “constructively fight” are intimately tied to the ways in which algorithms are optimized. While the courts may hold that regulation of algorithms cannot place undue burdens on speech, Congress must still find creative ways to incentivize platforms to focus more on the “rules of engagement,” not just on “protecting speech.”
