
Rulings Suggest Tide is Turning on Online Child Exploitation

Gretchen Peters, Kathleen Miles / Aug 29, 2022

Gretchen Peters and Kathleen Miles are co-founders of the Alliance to Counter Crime Online.

Clad in the armor of legal precedent, reinforced by the best publicity and lawyers that money can buy, the tech industry has long seemed invincible. Now, for the first time in years, it feels like the tide is starting to turn.

Last month, a federal judge rejected Visa’s motion to dismiss a lawsuit brought by a child sexual abuse survivor who claims the credit card company enabled video of her abuse to be monetized on Pornhub, which is owned by MindGeek. She was thirteen at the time.

After years of dogged reporting by activist groups and journalists, “Plaintiff adequately alleges that Visa knew that MindGeek’s websites were teeming with monetized child porn,” wrote Judge Cormac Carney of the U.S. District Court for the Central District of California. Carney drew a clear line of connection:

At this early stage of the proceedings, before Plaintiff has had any discovery from which to derive Visa’s state of mind, the Court can comfortably infer that Visa intended to help MindGeek monetize child porn from the very fact that Visa continued to provide MindGeek the means to do so and knew MindGeek was indeed doing so. Put yet another way, Visa is not alleged to have simply created an incentive to commit a crime, it is alleged to have knowingly provided the tool used to complete a crime.

Following the ruling, Visa and Mastercard swiftly cut ties with MindGeek, which has suffered an exodus of senior leadership since a damning New Yorker article in June exposed the company’s almost non-existent moderation practices.

A week earlier, an Oregon judge ruled that a separate lawsuit could proceed against Omegle, a platform that randomly matches users for video chat. The suit contends that Omegle’s landing page inviting users to “Talk to Strangers!”, coupled with its lack of protections (the platform pairs children with adults, does not verify users’ ages, and does not monitor or flag abusive users), makes it a dangerous product. The minor plaintiff in that case was matched, at eleven years old, with a pedophile who abused her for years.

Importantly, both cases focused on liability for how the platforms are designed and moderated, not on the publishing of user-generated content, which is arguably protected under Section 230 of the 1996 Communications Decency Act.

For more than 20 years, U.S. courts interpreted the Section 230 liability shield expansively, even in cases where tech companies had been put on notice that illicit or otherwise harmful activity was occurring on their platforms.

It is useful to remember that when Section 230 was adopted in 1996, only about 20 million Americans had access to the Internet at all. Most connected via telephone dial-up on desktop computers. Smartphones, social media and algorithmic amplification hadn’t even been invented yet. Mark Zuckerberg was just 12 years old, nearly the same age as the Omegle and Visa plaintiffs.

More than a quarter century later, judges are beginning to recognize that the important protections for free expression that Section 230 provides should not extend to platform design that causes or amplifies harm. The Ninth Circuit began this trend last year when it ruled that Snap could be sued for the negligent design of a feature on its app that contributed to the deaths of two teenage boys, signaling a sea change in how courts understand the way tech platforms operate.

Let’s be clear: these rulings do not mean that tech companies are finally being held to account for the harms they enable. Plaintiffs’ attorneys in both the Omegle and Visa lawsuits still need to argue their cases, after all. These rulings simply give victims the opportunity to have their cases heard. After so many years of victims’ cases getting dismissed on Section 230 grounds, simply getting a day in court feels like a victory in and of itself.

Legal reform is also on the horizon, with lawmakers in multiple parts of the world working to define the crucial distinction between free expression and exploitation. The Digital Services Act, agreed upon by the European Union in April, will impose a duty of care on tech platforms to restrict and remove organized crime activity on their systems, as would the UK Online Safety Bill, which is still under debate. Multiple bills have been introduced in the U.S. Congress, some of which, like the Online Consumer Protection Act, would specifically establish a duty of care requiring tech companies to enforce their own community standards. Bipartisan lawmakers in California have introduced legislation that would require tech companies to consider the privacy and protection of children in the design of any digital product they release.

Make no mistake, the problem of online exploitation remains as vast and deep as the ocean. Social media algorithms continue to facilitate and amplify an immense amount of harm – from spreading child sex abuse content, disinformation and extremism, to connecting fentanyl peddlers to teens.

But for the first time in years, it’s starting to feel like that won’t always be the case. We welcome the change.

Authors

Gretchen Peters
Gretchen Peters is Executive Director of the Alliance to Counter Crime Online. She also conducts complex research and investigations into organized crime, fraud and corruption.
Kathleen Miles
Kathleen Miles is a leading authority on transnational organized crime networks in sub-Saharan Africa. She has developed and provided on-the-job training to analysts across the continent, instructing them in complex link and network analysis. Previously, Kathleen consulted to the U.S. intelligence, ...
