Ben Lennett is a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy.
Ahead of oral arguments in Gonzalez v. Google, LLC at the Supreme Court next week, I sent a short questionnaire to gather the perspectives and legal opinions of various organizations that filed briefs with the Court. It asked each organization for its perspective on the Gonzalez case and on the arguments by the Petitioner and the U.S. government urging the Court to narrow Section 230 protections. I also asked for opinions on the Zeran v. AOL decision, which has largely shaped U.S. courts’ interpretation of Section 230’s immunity protections.
Below are responses provided by Jolina Cuaresma, Senior Counsel, Privacy & Technology Policy at Common Sense Media. Read Common Sense’s full amicus brief here.
What is your position generally on the merits of the Gonzalez case? Is Google liable if its algorithms recommend terrorist videos to users? Is it liable if it monetizes those same videos with ads?
Our position is that the plain meaning of Section 230 does not extend to Google’s recommending certain content to particular users. In other words, Google is not protected when it is directing web traffic to particular posts.
A ruling in favor of the Petitioner does not make Google liable when its algorithms recommend terrorist videos to users. Such a ruling only means that Google cannot use Section 230 to get cases dismissed as a matter of routine. To be clear, the Petitioner still bears the burden of proving that Google violated the Antiterrorism Act.
Were the Supreme Court to use the plain meaning of each word in Section 230, then Google becomes like all other companies that compete in our market system. Virtually every corporation is expected to identify litigation risks and determine its risk tolerance. Why should Google be exempted from that?
If the court relies on the arguments in your brief to make its decision, how will it impact social media and the internet more broadly?
I doubt there will be an immediate impact. Each social media firm has to first figure out its tolerance for risk. In the fourth quarter of 2022 alone, Google’s parent company, Alphabet, reported a net income of over $13.9 billion. Google may very well maintain the status quo if it determines it has a high tolerance for risk. Other companies may be more risk-averse, and they’ll have to innovate. After all, recommendations don’t have to be based on behavioral profiles developed from the billions of data points collected from users. Moreover, Google’s current algorithms are necessary (i.e., critical to its entire business model) only if Google and advertisers believe that without them, there’s no other way to keep users engaged on their platforms. If that’s the case, then the decision will spur innovation. Isn’t there that saying from Plato… necessity is the mother of invention?
Ben Lennett is an editor at Tech Policy Press. He has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology Institute at the New America Foundation and as a policy expert providing analysis to foundations, governments, and other institutions.