Perspective

Livestreaming Platforms Must Demonstrate Their Safety Measures' Effectiveness

Dhanaraj Thakur / Nov 18, 2025

In the conversation about keeping children safe online, one heart-rending problem stands out: child sexual exploitation and abuse (CSEA) on livestreaming platforms, which persists around the globe. In October, a German man was charged on hundreds of counts of pushing children in various countries to harm themselves on video and to create pornographic content. Earlier this year, a UK man was convicted of arranging livestreamed sexual abuse of children. And last year, a Florida man and his girlfriend and son were arrested for allegedly sexually abusing children on livestreaming sites, earning money from anonymous predators who paid to watch. The appalled judge in the case said, “For the past 25 years, I've seen just about everything, so to shock the court's conscience is frankly a difficult proposition at this point in the court's career.”

These stories of horrific and illegal activities are unfortunately not unique, and are playing out across popular livestreaming platforms, where users — and particularly young people — gather online to generate and share content about gaming, sports, politics, music, and more. Perpetrators also use livestreaming services to groom or sextort children, to pay children directly to perform sexual acts live, or to pay others to abuse children on camera.

As a parent, I find the prevalence of CSEA on popular livestreaming platforms alarming. As a researcher, I know that livestreaming services are making efforts to combat CSEA on their platforms. While some measures sound promising, platforms should also do more to demonstrate whether those efforts are working.

Research by the Center for Democracy & Technology shows that platforms use three major approaches to address this problem. Many platforms employ design-based methods to restrict or limit access to livestreaming, such as conditioning the ability to livestream on follower count or on parental permission for users under age sixteen. In response to the sheer scale of user-generated content, platforms are also increasingly using machine learning models to identify CSEA in livestreams. But recognizing sexual content is difficult in the best of circumstances. If it's hard for the US Supreme Court to identify obscenity (“I know it when I see it”), imagine how hard it is to program a digital tool to make that distinction — especially when that tool isn't analyzing a photograph or text, but a moving image produced in real time.
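
To make the first, design-based approach concrete, here is a minimal sketch of what such an eligibility gate might look like. It is purely illustrative: the follower threshold, the age cutoff, and the field names are assumptions invented for this example, not any platform's actual policy or code.

```python
# A minimal, illustrative sketch of design-based gating: whether a user may
# start a livestream depends on age, parental permission, and follower count.
# The thresholds below are hypothetical, not any platform's actual policy.
from dataclasses import dataclass

@dataclass
class User:
    age: int
    follower_count: int
    has_parental_permission: bool

def may_livestream(user: User,
                   min_followers: int = 50,        # assumed follower threshold
                   parental_consent_age: int = 16) -> bool:
    """Return True only if the user clears both illustrative eligibility checks."""
    if user.age < parental_consent_age and not user.has_parental_permission:
        return False
    if user.follower_count < min_followers:
        return False
    return True

# A 14-year-old with parental permission and enough followers may stream;
# the same account without permission may not.
print(may_livestream(User(age=14, follower_count=120, has_parental_permission=True)))   # True
print(may_livestream(User(age=14, follower_count=120, has_parental_permission=False)))  # False
```

The value of a gate like this depends entirely on the thresholds chosen and on how reliably age and permission are actually verified, which is precisely the kind of information platforms rarely publish.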

A third approach is to analyze the "signals" or context around content, such as metadata about whether an account’s activities violate a platform's CSEA policies. Is content being published by someone who has been warned about CSEA before? Do the participants on the livestream have a history of particularly unsavory viewing habits? These clues may be enough for a platform to investigate, throttle, or ban a particular stream, even without reviewing the content itself. Platforms could also share those signals with other platforms to prevent bad actors from moving from one service to another.
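
As a rough illustration of how such signal-based triage could work, the sketch below combines a few hypothetical account-level signals into a risk score. The specific signals, weights, and cutoffs are assumptions made for this example, not a description of any platform's real system.

```python
# An illustrative sketch of signal-based review: combine account-level metadata
# into a risk score that can trigger investigation or throttling without
# inspecting the stream content itself. The signals, weights, and thresholds
# are assumptions made for this example, not a real platform's formula.
from dataclasses import dataclass

@dataclass
class StreamSignals:
    prior_csea_warnings: int        # prior policy warnings on the broadcaster's account
    flagged_viewer_fraction: float  # share of viewers with prior CSEA-related flags
    account_age_days: int           # how new the broadcasting account is
    reports_last_24h: int           # user reports against this stream

def risk_score(s: StreamSignals) -> float:
    """Weighted sum of contextual signals, capped at 1.0."""
    score = 0.0
    score += 0.4 * min(s.prior_csea_warnings, 3) / 3
    score += 0.3 * min(s.flagged_viewer_fraction, 1.0)
    score += 0.2 * min(s.reports_last_24h, 10) / 10
    score += 0.1 * (1.0 if s.account_age_days < 7 else 0.0)  # brand-new accounts score higher
    return min(score, 1.0)

def triage(s: StreamSignals) -> str:
    """Map the score to an action tier; the cutoffs are illustrative."""
    score = risk_score(s)
    if score >= 0.7:
        return "interrupt stream and escalate to human review"
    if score >= 0.4:
        return "throttle distribution and queue for review"
    return "no action"

# Example: a repeat offender broadcasting to heavily flagged viewers from a
# brand-new account gets escalated.
print(triage(StreamSignals(prior_csea_warnings=3, flagged_viewer_fraction=0.8,
                           account_age_days=3, reports_last_24h=8)))
# interrupt stream and escalate to human review
```

The point is that a stream can be escalated or throttled based on context alone, without a moderator ever viewing the content.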

In theory, these are good approaches to a serious problem. But in reality, independent researchers, policymakers, and others simply don't know whether they work, because platforms have not made that information available. I specialize in understanding the impacts of new technology, and even I don't know how well these methods can keep my child safe; how can other parents? Compare this to other children's products, like strollers: parents aren't left simply to trust that safety features such as straps and padding actually work. Manufacturers themselves, independent organizations like Consumer Reports, and government agencies test these products, and in many cases make the results publicly available.

In the case of livestreaming platforms, we have only companies’ statements that their safety features effectively prevent CSEA. Some of these claims are now the subject of lawsuits. In addition, the third-party vendors many livestreaming platforms rely on to help detect CSEA often make unverifiable claims about the efficacy of their machine learning products — bordering on AI snake oil.

How do we address these evidence gaps? In short, we need livestreaming platforms to share more about how effective their safety measures are, with the end goal of getting that information to parents.

To move in this direction, companies can support independent expert audits of the machine learning models they use to detect CSEA, adopt best practices for testing models, and publicly communicate test results. These steps could help independent experts better understand how CSEA detection tools compare, and help parents and their children make informed decisions about the safety of different livestreaming apps.
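
What "publicly communicating test results" could mean in practice is fairly simple: report standard metrics from a labeled evaluation set. The sketch below computes precision, recall, and false-positive rate; the inputs are illustrative, but the metric definitions are the standard ones an auditor would use.

```python
# A sketch of the kind of evaluation a platform or independent auditor could
# run and publish: precision, recall, and false-positive rate for a CSEA
# detection model on a labeled test set. The example data is illustrative;
# the metric definitions are standard.
def evaluate(predictions: list[bool], labels: list[bool]) -> dict[str, float]:
    """Compare model predictions against ground-truth labels for each stream."""
    tp = sum(p and l for p, l in zip(predictions, labels))          # correctly flagged
    fp = sum(p and not l for p, l in zip(predictions, labels))      # wrongly flagged
    fn = sum(not p and l for p, l in zip(predictions, labels))      # missed abuse
    tn = sum(not p and not l for p, l in zip(predictions, labels))  # correctly ignored
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,            # of flagged streams, share truly abusive
        "recall": tp / (tp + fn) if tp + fn else 0.0,               # of abusive streams, share caught
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,  # of benign streams, share wrongly flagged
    }

# Example with four hypothetical streams: the model catches both abusive
# streams but also wrongly flags one benign stream.
print(evaluate(predictions=[True, False, True, True],
               labels=[True, False, False, True]))
# precision ≈ 0.67, recall = 1.0, false_positive_rate = 0.5
```

Numbers like these, published alongside the methodology used to produce them, are what would let parents and researchers compare detection tools across platforms.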

Information about the efficacy of the safety approaches in use and in development for livestreaming services should be available, just as it already is in so many other aspects of our children's lives.

Authors

Dhanaraj Thakur
Dhanaraj Thakur is Research Director at the Center for Democracy & Technology, where he leads research that advances human rights and civil liberties online. Over the last 15 years, he has designed and led research projects that have significantly informed tech policy and helped improve the way publ...
