
How Congress Could Stifle The Onslaught of AI-Generated Child Sexual Abuse Material

Jasmine Mithani / Sep 25, 2025

Jasmine Mithani is a fellow at Tech Policy Press.

Advances in generative artificial intelligence models have birthed an overwhelming crisis of synthetic child sexual abuse material (CSAM). There are now AI-generated child sexual abuse videos, not just still images, according to the Internet Watch Foundation, and the material is becoming more realistic and more graphic.

Many AI-generated CSAM images are created using open-source models whose training data contained images of real child abuse. But models marketed to the general public are also at risk of producing CSAM, potentially inadvertently.

New research from Stanford recommends one way the United States Congress could move the needle on model safety: Allow tech companies to rigorously test their generative models without fear of prosecution. In other words, federal authorities could create a safe harbor for red teaming models for child sexual abuse material.

“Generative AI models offered by major AI companies are used by tens of millions of people every day, and we should encourage them to make their models as safe as they possibly can for the general public to use,” said Riana Pfefferkorn, policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence and co-author of July’s report on AI-generated CSAM.

Red teaming is a way to stress-test AI systems by simulating an adversarial actor. One way to red team a model for CSAM is to probe edge-case prompts, but doing so might itself facilitate the creation of illegal images.

In the US, there are no federal protections for companies or researchers who want to test models to see if they can create child sexual abuse material. While there are some nuances, AI-generated CSAM is generally illegal.

A solvable gap in evaluating models for child safety

Thorn, a nonprofit that creates technology to defend children from sexual abuse and exploitation at scale, suggests that if a model can create an image of a child and an image of a nude adult, it is capable of generating sexual pictures of children, a concept known as model compositionality. But it’s not clear whether removing those two factors would fully prevent models from generating CSAM, and ongoing research suggests it might not.

Elissa Redmiles, an assistant professor of computer science at Georgetown University, is part of an international research team that has been investigating the limits of compositionality. “Preliminary results show that in a model where we’ve automatically filtered out a majority of training data (~94 percent) for a particular concept (e.g. image of children) we can still produce images of a composed concept: a child wearing glasses,” she told Tech Policy Press.

In other words, cleaning training data might not be enough to hinder a model from creating CSAM.

“The people who are charged with red teaming are doing it with one hand tied behind their backs, and they know that adversarial actors aren't handicapping themselves that same way,” said Pfefferkorn. “That's where you end up with this uncomfortable position that AI companies are in, where they may sincerely want to make their models better but they face legal risk in doing so.”

Some states, like Arkansas, have included a carve-out for adversarial testing in laws concerning possession of child sexual abuse material. But these don’t provide protection from stricter federal laws.

The most impactful action the federal government could take would be passing tightly scoped safe harbor legislation so that tech companies could red team their models against CSAM without fear of prosecution, Pfefferkorn said.

The Department of Justice (DOJ) could also provide a public “comfort letter” protecting red teaming. It has happened before — in 2022, a new policy excluded “good-faith security research” from being charged under the Computer Fraud and Abuse Act. Previously, cybersecurity researchers and white-hat hackers were disincentivized from reporting vulnerabilities because they could be prosecuted for cyber crimes.

Pfefferkorn said an executive order could also be a solution, but like a DOJ policy, it could be rescinded at any time. For instance, President Donald Trump revoked President Joe Biden’s executive orders on artificial intelligence within days of assuming office. A report commissioned by one of those Biden orders detailed ways to reduce CSAM in generative artificial intelligence and called for more research into red teaming strategies. It also noted the legal barriers to red teaming for CSAM.

Individual companies could come to private agreements with the DOJ, but Pfefferkorn pointed out that this solution does not scale. Right now, the only group authorized by the federal government to handle CSAM is the National Center for Missing and Exploited Children, or NCMEC. (NCMEC did not respond to multiple interview requests.)

Having to coordinate with an outside partner, instead of following a typical iterative development process, is “probably one of the most unique challenges to this work,” said David Rust-Smith, senior staff machine learning engineer at Thorn.

“It limits the velocity at which we can do some of this work, and it means we do have to rely heavily on strong relationships with partners,” he said. If Thorn’s models aren’t performing as expected, the engineering team needs to wait for feedback from partners authorized to store and review CSAM, which slows down the process.

Thorn is full of subject matter experts who stay abreast of tactics used by offenders to produce more CSAM. Rust-Smith acknowledged they would do more red teaming or model-specific research themselves if possible, “but we don't, because there's so much uncertainty around which parts of this are risky and which parts aren't, from a legal standpoint.”

There are some risks in creating a safe harbor for red teaming against CSAM; one fear is that adversarial actors could pose as legitimate trust and safety workers in order to exploit the protections and generate CSAM.

One way to combat this would be to extend immunity for red teaming only to people employed by, or under contract with, a particular company, Pfefferkorn said.

It’s unclear what tech companies are doing right now, as few talk publicly about red teaming for AI-generated CSAM, possibly fearing legal ramifications. Several companies, including CivitAI, Meta, and Microsoft, committed in 2024 to following Thorn’s Safety by Design principles, which include red teaming for AI-generated CSAM. The nonprofit marked red teaming as one of the most significant mitigation techniques AI developers could deploy.

Anthropic shared a blog post about its commitment to Safety by Design and said it is working on red teaming for AI-generated CSAM, though it did not detail how it approaches the matter. A spokesperson for Anthropic said in-house experts red team for child sexual exploitation material and continue to assess model capabilities.

The recipe for effective policy

Right now, the Trump administration is rallying around both AI deregulation and increasing child safety. Pfefferkorn has seen these two forces come into conflict this year on Capitol Hill.

“What I've seen is this dynamic of, if we make red teaming safer for AI companies, we're just helping Big Tech, and Big Tech ought to be punished, not helped,” she said. “That's in direct tension with the whole ‘we need to cast off any regulatory constraints so that it can soar,’ even though CSAM generation is not a good business case for innovation, or entrepreneurship.”

Pfefferkorn recommended a concise bill like the REPORT Act, signed into law in May 2024, which increased the amount of time companies were required to preserve reports of CSAM made to NCMEC. Research by Pfefferkorn and her colleagues at Stanford showed law enforcement struggled to pursue cases from NCMEC’s CyberTipline because of a short, mandated retention period for reports.

“Now that the law requires keeping it for a full year, that is something that, based on our research, would have an actual measurable beneficial impact on the ability to fight and investigate CSAM,” Pfefferkorn said.

The REPORT Act passed the Senate by unanimous consent and passed the House with a voice vote.
