New EU Privacy Rule May Complicate Moderation of Child Sexual Abuse Material

Maggie Engler / Jan 4, 2021

Last month, Pornhub, the largest pornography site on the Internet, announced major changes to content moderation on the platform, including banning downloads, restricting uploads to verified users, and expanding the company’s efforts to proactively find and take down abusive content. The moves followed sustained pressure from activists against non-consensual imagery, 2018 legislation that eliminated platforms’ liability protection for sex-trafficking-related content, and a recent New York Times report on victims whose assaults were posted on the site for the world to see. They represent a significant shift from the way Pornhub has operated for years as one of the most-visited websites in the world.

While Pornhub is taking steps in the right direction, the problem of sexual abuse material, including child sexual abuse material (CSAM), continues to plague not only pornography sites but major platforms as well. Carrie Goldberg, a prominent victims’ rights lawyer who specializes in digital sexual harassment cases, tweeted that “For every 1 case involving a rape tape on Pornhub, I have 50 involving rape and CSAM being disseminated on Insta and FB.” In recognition of the severity of this issue, Facebook, Instagram, and other major social media platforms have built out mature functions for detecting CSAM, the same types of systems that Pornhub is committing to implement. Now, however, a proposed rule under the European Union’s ePrivacy Directive threatens to remove the sharpest arrow in the quiver against online sexual abuse material through restrictions on the “monitoring of email, messaging apps, and other digital services” in the EU.

To understand the debate, it’s important to understand both the motivations behind the ePrivacy Directive and how current content moderation processes are implemented. The European Union has long been the global leader in defining data privacy rights; the creation of the behemoth General Data Protection Regulation (GDPR) changed data collection, usage, and storage practices for commercial entities around the world. The current ePrivacy Directive acts as lex specialis to the GDPR, providing specific guidance on how to comply with principles outlined in that law. Specifically, it says that any monitoring of personal communications without a court order violates the privacy rights of Europeans. As written, the ban on monitoring precludes any type of moderation.

It should be noted that searching for child sexual abuse material is distinct in several ways from other types of monitoring. For some types of content violations, such as hate speech, both policies and mitigation approaches may differ substantially across platforms. Non-consensual pornography and especially CSAM are unusual content violations in that there is widespread agreement on what constitutes them, along with a great deal of collaboration among technology companies, law enforcement, and nonprofit organizations such as the National Center for Missing and Exploited Children (NCMEC). This level of consistency and cooperation is unique in the content moderation space. Notably, it has enabled the maintenance of an active clearinghouse of known child sexual abuse material, which organizations can leverage to immediately detect and delete any content already identified as CSAM by another organization.

Typically, content violations are identified through a machine learning algorithm that predicts whether a piece of text or an image is likely to be objectionable, through human review (for example, after a post is reported by another user), or through some combination of the two. A comparable tool in the same domain is grooming-detection software, built to identify conversations in which an adult is sexually grooming a minor. With tens of thousands of examples of such exchanges, a model can pick up common patterns and language, but it will likely never be perfect; too much depends on context the model doesn’t have. In contrast, the technology that searches for CSAM is nearly 100% precise. The clearinghouse of CSAM contains millions of photos, videos, and other materials, and millions more are added each year through reports from platforms and law enforcement. For each image, a hash, a mathematical transformation of the data, is generated, and platforms can check any future uploads for matching hashes. Because the image hashing algorithms used are perceptual, meaning they are robust to small changes, similar images generate similar hashes, so detection also captures photos that differ in only tiny ways from reported CSAM.
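
As a rough illustration of how this matching works, the sketch below uses the open-source Python libraries Pillow and imagehash as stand-ins for the proprietary, more robust hashing systems (such as Microsoft’s PhotoDNA) that platforms actually rely on; the filenames, the distance threshold, and the helper function are hypothetical.

```python
# Minimal sketch of hash-based image matching, for illustration only.
# Assumes the open-source "imagehash" and "Pillow" packages; production
# systems use more robust, proprietary perceptual hashes.
import imagehash
from PIL import Image

# Hashes of previously reported images, as a clearinghouse might share them.
# Only the hashes are stored, never the underlying images.
known_hashes = {imagehash.phash(Image.open("reported_image.png"))}

def matches_known_material(upload_path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose perceptual hash is within a small Hamming
    distance of any known hash, so that near-duplicates (resized,
    re-encoded, or lightly edited copies) are still caught."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known <= max_distance for known in known_hashes)
```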

Hashing functions are one-way functions, meaning that one can always compute the hash given a value, but can’t retrieve the value given only its hash. It’s like using a phonebook: if you have someone’s name, you can flip to that page to find their number, but if you’re looking for the owner of a particular phone number, you’d have to start reading from the beginning and hope you get lucky. The best-known application of hashes is password storage. It’s risky to store the actual text of a password in case of a data breach, so applications typically store a hashed version instead. When the user enters their password at login, the application performs the same transformation and compares the result to the hash it has saved for that user, verifying that the password is correct if the hashes match.
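
To make the one-way property concrete, here is a minimal sketch of password hashing and verification using Python’s standard hashlib module; a production system would add a per-user salt and a deliberately slow algorithm such as bcrypt or scrypt, so treat this only as an illustration of the property described above.

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way transformation: easy to compute, infeasible to reverse.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# At signup, only the hash is stored, never the plaintext password.
stored_hash = hash_password("correct horse battery staple")

def verify(attempt: str) -> bool:
    # At login, re-hash the attempt and compare; the stored hash alone
    # cannot be turned back into the original password.
    return hash_password(attempt) == stored_hash
```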

The upshot of this characteristic is that in the standard implementation of CSAM scanning, no pictures need to be viewed or stored in their original form by platforms, and the only ones investigated are those that match known CSAM with high confidence. To reiterate, the existing tools approach perfect precision on this task. So, in contrast to other content moderation methods (machine learning models, human review) that require access to all original content, scanning against image hashes, the predominant means by which CSAM is found online, preserves users’ privacy. Given the broad language used in the ePrivacy Directive, though, Facebook and its peers may stop scanning images altogether, leaving populations more vulnerable to digital sexual harassment and abuse.

Certainly, privacy protections should concern the members of the European Commission, especially if users are unaware that their communications are swept into data collection by technology companies. But there’s a huge difference between using the text a user has written in hundreds of emails to target them with advertising and comparing image hashes to a hash corpus of sexual abuse material. The latter is more privacy-preserving than any other moderation approach, not to mention extremely accurate and effective.

In the future, it would be encouraging to see automated moderation with similar privacy guarantees developed for other types of content, and recent work on differentially private machine learning shows promise in that respect. For now, though, the EU simply cannot allow CSAM detection tools to be categorized in the same way as text mining for behavioral insights. The problem is too big, and the consequences too horrific, to accept any reduction in detection and remediation. After all, for all the Directive’s reverence for the right to privacy, it’s the privacy of child victims that will be most impacted by a change that defangs the organizations already working together to limit the spread of child sexual abuse material online.

Authors

Maggie Engler
Maggie Engler fights platform manipulation at Twitter. Previously, Engler led data science development at Global Disinformation Index and worked on authentication and modeling user behavior at Duo Labs.
