The Supreme Court Should Handle Cases Concerning Algorithms Based on Technical Specificity and Nuance

Sahar Massachi / Feb 19, 2023

Sahar Massachi is the co-founder and Executive Director of the Integrity Institute.

The Supreme Court in Washington, D.C.

For years now, people in government and civil society have been talking about fixing the internet without actually talking to the workers on the ground – tech professionals whose job it is to fix the social platforms that employ them. The latest example is the discourse around the Gonzalez v. Google case before the Supreme Court, which hinges on whether social media companies can be held liable for recommending harmful content to their users. In this case, justices must rule on legal arguments that rest on a combination of technical details, legal interpretation, and the weighing of different values. The Court’s typical expertise and orientation are towards the latter two, but in cases such as this the actual technical details matter.

The integrity professionals who make up the Integrity Institute, which I co-founded, are tech workers at social media and internet platforms with experience in roles dedicated to fixing harms to people and society. We have observed, and often helped build, the architecture of the social internet at these companies. Independent of company funding, we are honest brokers on technical matters in platform regulation and governance. It is in this spirit that we banded together to write an amicus brief – in support of neither party – that offers an independent explanation of the technology at issue in Gonzalez v. Google.

As we lay out in our amicus brief, we are decidedly neutral on the merits of who should win in this case. Frankly, our members disagree with each other. What we all agree on, however, is that the Court should have an accurate understanding of how the technology it is evaluating operates. As legal experts have made clear, the Court’s ruling could have broad and unintended implications if not carefully considered.

The technology in question in the Gonzalez v. Google case is what people colloquially call “algorithms.” This colloquial term is not helpful because it is imprecise and risks leading the Court to rule far too broadly. In our brief, we strongly urge the Court to decide cases such as Gonzalez v. Google based on the specific algorithm in question. From the perspective of integrity professionals, here are the most important things to understand about algorithms.

First, pretty much any chunk of computer code implements an algorithm, so it is important in any legal case – especially one of this magnitude – to be precise about exactly what kinds of algorithms we are talking about. The algorithms in Gonzalez v. Google are specifically the systems Google built internally to rank or recommend third-party content to users. Ranking systems are complex, powerful, and used for many purposes. Understanding them can feel intimidating, especially when legal arguments are involved.
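To see how broad the colloquial term really is, note that even a trivial function counts as an algorithm in the technical sense. The snippet below, a purely illustrative Python example of our own, is just as much "an algorithm" as the ranking systems at issue here:

```python
# Purely illustrative: even this two-line function is an "algorithm"
# in the technical sense, though it has nothing to do with recommendations.
def alphabetize(names: list[str]) -> list[str]:
    """Return the names in alphabetical order."""
    return sorted(names)
```

A ruling written about "algorithms" in general would sweep in code like this; a ruling about the specific recommendation systems in question would not.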

Second, ranking systems are actually quite easy to understand at a basic level: they are complex systems that optimize recommendations by maximizing an often conceptually simple metric. If you understand what a ranking system is optimized for, you already understand a great deal about its behavior and impact. In our brief and on our website, we’ve laid out how to understand – and inspect – ranking systems. If we had mechanisms to ensure companies like Google provide well-thought-out transparency about their ranking systems, we could easily tell whether the circumstances in Gonzalez v. Google were a one-off tragedy or part of a pattern of behavior driven by the underlying technology.
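To make that concrete, here is a minimal sketch, in Python with hypothetical names and weights, of what a ranking system boils down to: predict some objective metric for each candidate item, then sort by it. Real systems are vastly more elaborate, but the behavior-defining choice is the objective.

```python
# Minimal, purely illustrative sketch of a ranking system.
# Names and weights are hypothetical, not any company's actual code.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    predicted_clicks: float      # a model's guess at one engagement signal
    predicted_watch_time: float  # a model's guess at another

def objective(item: Item) -> float:
    """The metric being maximized. What gets recommended
    follows from this single choice."""
    return 0.7 * item.predicted_clicks + 0.3 * item.predicted_watch_time

def rank(candidates: list[Item]) -> list[Item]:
    """Recommend candidates in descending order of the objective."""
    return sorted(candidates, key=objective, reverse=True)
```

If a company documents what its objective rewards, you can reason about what its ranking system will amplify without ever reading the rest of its code.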

Lastly, companies do use ranking systems for many purposes – including good ones like content moderation – but most choose to optimize primarily for things that aren’t necessarily great: user engagement and overall user growth. Like ClickHole, these algorithms have a simple creed: all content deserves to go viral. Optimizing for engagement carries a real trade-off: you are then not optimizing for anything else, such as quality or the absence of harm. Empirically, content that is more likely to be “bad” (e.g., to violate any reasonable community standards on any platform) is also more likely to be engaged with. Mark Zuckerberg himself noted the peril of optimizing primarily for engagement, illustrated by the red line in an infamous graph from 2018. Human nature has not changed in the years since.
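To illustrate that trade-off, the sketch below (again purely hypothetical, with "predicted_harm" standing in for an assumed policy-violation classifier) ranks the same two items under two objectives: engagement alone, and engagement discounted by predicted harm. The highly engaging but likely-violating item wins under the first objective and loses under the second.

```python
# Purely illustrative: the same candidates ranked under two different objectives.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    predicted_engagement: float  # likelihood of clicks, shares, comments
    predicted_harm: float        # 0.0 (benign) to 1.0 (likely violating)

def engagement_only(item: Item) -> float:
    return item.predicted_engagement

def engagement_discounted_by_harm(item: Item) -> float:
    # Demote likely-violating content instead of rewarding its engagement.
    return item.predicted_engagement * (1.0 - item.predicted_harm)

candidates = [
    Item("cat_video", predicted_engagement=0.40, predicted_harm=0.01),
    Item("outrage_bait", predicted_engagement=0.90, predicted_harm=0.70),
]

print([i.item_id for i in sorted(candidates, key=engagement_only, reverse=True)])
# ['outrage_bait', 'cat_video']
print([i.item_id for i in sorted(candidates, key=engagement_discounted_by_harm, reverse=True)])
# ['cat_video', 'outrage_bait']
```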

Once you know these things, it should become clear that ranking systems are not inherently good or bad – they simply promote and demote both good and bad content based on what they are optimized to do. And, frankly, they are everywhere. Understanding what they do is important. Discussions about algorithms in Gonzalez v. Google often make them out to be black boxes. While the technical ways companies optimize their ranking systems are indeed complex, what they are optimized for is not – you just need transparent documentation from those companies to see the trade-offs they make. Lacking such systematic transparency from the platforms, the Court should avoid a ruling in Gonzalez v. Google that would apply to recommendation and targeting algorithms writ large. (And someone – the companies, Congress, whomever – should make that transparency happen.)

Integrity workers are hard at work, right now, at all companies, including Google. Their job titles or projects might have different names: responsible design, anti-abuse, trust and safety, community standards, or even integrity. But the work they do is similar: they try both to protect people from abuse of the system and to change the system so that it optimizes less for engagement and growth and more for what they might call quality. Ensuring quality can mean different things, but at the very least it means “try not to spread terrorist propaganda.”

Overall, though, we must demand that companies do two things. First, be transparent about the systems, optimizations, and design choices that affect the products integrity workers are trying to fix. Second, place integrity work front and center. Companies must empower integrity professionals to do their jobs – including being able to shift the optimization of products away from short-term company goals and towards helping individuals, societies, and democracies thrive.
