Court Rules That Constitution Protects Private Possession of AI-Generated CSAM

Riana Pfefferkorn / Mar 20, 2025

The Robert W. Kastenmeier US Courthouse in Madison, Wisconsin. Credit: Carol M. Highsmith Photography.

A US district court opinion from last month has the potential to shape federal obscenity jurisprudence in the age of AI-generated child sex abuse material (CSAM). In reiterating the constitutional right to private possession of obscene material, the decision is a timely reminder of the limits the First Amendment places on government efforts to punish speech that is harmful to children. However, by allowing a defendant’s prosecution to go forward on other charges, the court’s opinion shows that the government has enough tools to bring accused offenders to justice for AI-enabled child sexual exploitation and abuse.

Last February, I published a paper with Lawfare analyzing the legal and policy aspects of AI-generated CSAM. As my paper explains, obscenity and CSAM are two distinct categories of unprotected speech. Because generative AI can now produce highly realistic imagery, I predicted that federal prosecutors would start relying more on a historically little-used law, the federal child obscenity statute, 18 U.S.C. § 1466A. Unlike federal CSAM laws, which apply only to material involving actual, identifiable minors, the child obscenity statute expressly does not require “that the minor depicted actually exist.” Prosecutors could thus avoid the potentially tricky problem of determining (and proving to a jury) whether a photorealistic image was of a real child or not.

Last May, a Wisconsin federal grand jury indicted a man, Steven Anderegg, for allegedly using Stable Diffusion to create obscene images of minors, then messaging them to a teenage boy on Instagram (prompting Meta to report his account to the CyberTipline). Anderegg was charged with three counts under Section 1466A—for production, distribution, and possession of child obscenity—as well as one count, under a separate statute, of transferring obscene material to a minor.

(There was no allegation that any of the images depicted real kids. That distinguishes Anderegg’s case from most of the other federal criminal cases involving AI-generated CSAM that I’ve seen so far, which have usually involved AI-modified images of actual kids.)

Anderegg moved to dismiss each of the four counts. In an opinion last month, the court largely rejected the motions. However, the court did dismiss the possession charge, holding that Section 1466A is unconstitutional as applied to Anderegg’s private possession of obscene “virtual” CSAM.

The Supreme Court has held that the First Amendment protects the right to possess obscene material in one’s own home, Stanley v. Georgia, 394 U.S. 557 (1969), so long as it’s not actual CSAM, Osborne v. Ohio, 495 U.S. 103 (1990). More recently, the Court held that the First Amendment also protects “virtual” CSAM that does not involve any actual children, Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002). Under that series of cases, Anderegg contended, the First Amendment protects his private possession of obscene AI-generated CSAM.

The court agreed with Anderegg, rejecting the government’s arguments to the contrary. The government asserted that the case was more like Osborne than Stanley, that Stanley is limited to obscene material depicting adults, and that Congress has compelling interests to ban possession of obscene “virtual” CSAM – for example, its potential use to groom children and the difficulty of distinguishing “virtual” CSAM from imagery involving real children. The court rejected these arguments as inconsistent with Free Speech Coalition, where, it noted, the government had unsuccessfully raised basically the same arguments (as I also discuss in my paper). Osborne, the court said, is not on point, as this case did not involve real children; rather, it’s more like Stanley, which “relies on the importance of freedom of thought and the sanctity of the home.”

Finally, the government attempted to distinguish Stanley because (unlike the state law at issue there) Section 1466A requires an interstate or foreign commerce jurisdictional hook, and Anderegg allegedly possessed the images on a foreign-made laptop. The court responded that this wasn’t a meaningful distinction, and that “[i]f the jurisdictional element were enough to overcome Stanley, Stanley would be a dead letter.”

That’s precisely what I said a year earlier: that “where CG-CSAM creators keep their material to themselves and never share it, they might be protected by the constitutional right to privately possess obscene matter,” and “there must be some limit” to Section 1466A’s jurisdictional hook, “otherwise, stretched far enough, the interstate-commerce hook would swallow the right to privately possess obscenity.” It is gratifying to see a court agree with my analysis and reiterate the continuing importance of Stanley.

It was not unexpected to see the government essentially try to relitigate the arguments that failed in Free Speech Coalition: at the time, Justice Clarence Thomas predicted that technological advances might one day require revisiting the ruling. Two decades later, with Justice Thomas the only member of that Court still serving, some observers believe that time has come thanks to the advent of AI tools for generating highly photorealistic imagery. To them, this court says, “not so fast.”

It’s not all good news for Anderegg. While the court agreed to dismiss the possession charge against Anderegg, it declined to extend Stanley to the production of obscene AI-CSAM. Stanley, the court said, was focused on possession, with no mention of production, and the Supreme Court hasn’t seen fit to recognize protection for production of obscenity in the intervening 55 years. In addition, the court declined to dismiss the Section 1466A distribution charge, as well as the charge for transferring the images to a minor.

If purely private possession of AI-CSAM is constitutionally protected under current caselaw but production is not, then using AI models (even locally hosted ones) to generate child obscenity in one’s own home is not wholly insulated from criminal prosecution. Subsequently transmitting the material to someone else, especially someone underage, is also grounds for liability. The court’s ruling shows that even with the Stanley limitation on possession charges, the laws on the books equip the government with sufficient options to prosecute AI-CSAM without offending the First Amendment – even where, as here, the material is wholly AI-generated and does not depict a real child.

Although it avoided dismissal of three of the four counts, the government is appealing the adverse ruling on the possession count to the Seventh Circuit (where the case has been assigned number 25-1354). To my knowledge, this will be the first criminal case involving generative AI, CSAM law, and the First Amendment to reach a federal appeals court. It will be one to watch.
