Is A Tech Company Ever Neutral? Cloudflare’s Latest Controversy Shows Why The Answer Is No.

Jenna Ruddock, April Glaser / Sep 16, 2022

Jenna Ruddock is a Research Fellow with the Technology and Social Change Project and April Glaser is a Senior Internet Policy Fellow at the Shorenstein Center at the Harvard Kennedy School.

It’s Time to Scrutinize Internet Infrastructure Policy

Infrastructure rarely makes headlines – until it fails. Internet infrastructure is no exception. But last month, Cloudflare – a popular internet infrastructure company providing a range of services, from domain name support to cybersecurity and content delivery – was reluctantly dragged into the spotlight (again). The problem wasn’t a busted pipe or a cyberattack targeting its network or clients, but rather that Cloudflare was continuing to protect one of its client websites despite overwhelming evidence of persistent online and offline harassment and abuse perpetrated by the site’s community of users. (This article will not name the particular website, to avoid fueling further harassment or directing readers to its content.)

Flag. Block. Suspend. Demonetize. Most of us are familiar with the range of tactics that major social media platforms use to moderate content online – and the confusion and challenges that have resulted from erratic efforts to referee user-generated content at scale. The Facebooks and YouTubes of the internet have proven less than effective at preventing the online communities they host from engaging in damaging behavior, including incitement to violence. The prospect is even more fraught when internet infrastructure companies that aren’t directly in the social media business make decisions about what is and isn’t acceptable to keep online.

But the stakes are just as high: consider Cloudflare’s decision in 2019 to stop providing services to 8chan – a website well-known for violent extremism and explicitly white supremacist content. That year, three mass shooters posted their hate-filled manifestos to 8chan before opening fire. Seventy-five people were killed in those shootings, with 141 casualties in total. Even in the immediate aftermath of the third attack – in El Paso, Texas – Cloudflare initially said it would not stop providing services to 8chan. Hours later, following public outrage and bad press, Cloudflare terminated its technical support for the site.

So how should we think about online infrastructure companies and their responsibilities to address harms perpetrated by websites using their services?

Social media sites that are in the business of encouraging people to post content have more targeted tools available to moderate that content – like flagging or removing a problematic post or banning an individual’s page. But companies that provide internet infrastructure services like web hosting or domain name services typically have far less granular options at their disposal, often limited to blunt actions like removing entire websites or blocking whole domains. Governments, too, are increasingly turning to infrastructure providers like ISPs in efforts to disrupt internet access for entire regions during times of unrest.

For those who would prefer to see a company like Cloudflare stay out of the content moderation game entirely – well, that ship has sailed. Up and down “the stack,” internet infrastructure services have repeatedly made unilateral decisions to drop entire websites – Cloudflare is hardly alone. When Cloudflare dropped the neo-Nazi website Daily Stormer in 2017, so too did GoDaddy and then Google, the site’s successive domain registrars. These decisions are largely shielded from public view, though, and rarely make headlines unless prompted by sustained public outcry. And it’s rare for internet infrastructure companies to proactively cite clear pre-existing guidelines or policies when they do take action in these cases. The result: a record of ad hoc, reactive decision-making so opaque and inconsistent that it’s difficult for anyone outside the companies to imagine better solutions to these thorny policy questions.

In a recent blog post, Cloudflare’s leadership offered what some found to be a compelling analogy in defense of the company’s severe reluctance, and at times outright refusal, to part ways with websites boasting long track records of harm. In its role as a website security services provider, the company argues, Cloudflare is much like a fire department. On that logic, refusing to provide its services to a website based on the contents of that website would be tantamount to refusing to respond to a fire because the home belonged to someone lacking “sufficient moral character.”

Without wading too deep into this specific analogy, there are two glaring issues with comparing most internet infrastructure providers to any public service that is rooted in the community it serves. The first and most obvious issue is that the vast majority of internet infrastructure providers are for-profit corporations that aren’t subject to any comparable regimes of public oversight and accountability. While these internet infrastructure companies might fairly position themselves and their services as valuable and even integral to the internet as a whole, their most concrete obligations are ultimately to their paying clients and, above all, their owners or shareholders.

But the second, more nuanced, distinction has to do with how we identify the rights and the harms at play. Often the provision of infrastructure services is positioned as a neutral default – only the denial of those services is framed as a political choice. In other words: denying services to websites or forums that promote or have been directly tied to violence has been readily framed as a potential denial of rights and thus an affront to the “free and open internet.” But when a company opts to continue providing services even with hard evidence that a site is being used to promote hate and abuse, that choice is largely not treated as a threat to the overall health of the internet in the same way. As legal scholar Danielle Citron has noted, however, online abuse itself “imperils free expression” – particularly by silencing “women, minorities, and political dissenters,” who are disproportionately targeted online.

Infrastructure companies themselves have championed this idea of neutrality, and absent support from law enforcement or the courts, cries for action from targeted individuals and communities are too often reduced to subjective disagreements over content or politics. Cloudflare’s analogy supplies just one example: not providing services to a website is compared to refusing to administer potentially life-saving emergency aid, while the harms of persistent, targeted harassment are reduced to a judgment call about “moral character.” And while companies might lean on their willingness to act pursuant to legal process, shifting the burden entirely to the legal system fails to account for the reality that law enforcement agencies and the courts have an abysmal record of not only discounting harms reported by communities who are frequent targets of online abuse, but also causing additional harm in the process.

One frequently articulated concern is that refusing services to one bad actor is a “slippery slope” that leads to refusing services to anyone, including marginalized communities often targeted by forums like 8chan. So far, that has not been the case. While Cloudflare claims its high-profile decisions to terminate services for 8chan and the Daily Stormer led to “a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations,” it’s unclear whether any of these requests are reflected in the company’s transparency reports. Greater transparency is needed throughout the stack in order for a well-informed public conversation to be possible. But equally important is examining how and when “slippery slope” arguments are applied. Cloudflare claims that its latest takedown decision was made because escalating threats – over the course of a mere forty-eight hours – led the company to believe there was “an unprecedented emergency and immediate threat to human life.” The slope from “revolting content” to harassment, swatting, and mass shootings encouraged by hateful communities online seems awfully slippery, too.

There are two things those concerned with creating a safe and flourishing digital world have learned from watching the long and unending conversation on social media content moderation. For one, there are few, if any, easy answers. This is just as true for internet infrastructure services as it is for major social media platforms. For another, problems don’t fix themselves or go away – tech companies respond to public outcry and to investigative journalism that makes them look bad. Trying to untangle complex policy questions in moments of crisis is unworkable – but so is continuing to insist that there are any neutral players.

There is no doubt that these horrific corners of the internet will persist – in some form, in some forum. There will always be places on the web where those determined to cause harm and perpetuate abuse can regroup and build new outposts. Combating these harms clearly demands a whole-of-society approach – but internet infrastructure providers are as much a part of society, and of the online ecosystem, as the rest of us. An honest, robust conversation about the real-world consequences of allowing communities of hate to grow online – and the ways in which internet infrastructure companies enable them to do so – is the only way toward an internet where diverse communities can safely create and thrive.

Authors

Jenna Ruddock
Jenna Ruddock is a Policy Counsel at Free Press and Free Press Action. Previously, she was a Research Fellow with the Technology and Social Change Project at Harvard Kennedy School’s Shorenstein Center and a Senior Researcher with the Tech, Law & Security Program at American University Washington College of Law.
April Glaser
April Glaser is a senior internet policy fellow at the Shorenstein Center at the Harvard Kennedy School. She previously worked as an investigative reporter at NBC News.
