Digital Exploitation: Mapping the Scope of Child Deepfake Incidents in the US

Katherine He, Adam Billen / Mar 28, 2025

A "Child Deepfake Incident Map" produced by Encode.

Over 300 million children worldwide have been victims of online sexual exploitation and abuse in the past year alone. Compounding this reality is a rapidly emerging new form of abuse: AI-generated “deepfake nudes.” According to Graphika, a company that analyzes social networks, millions of users visit more than 100 "nudify" sites online each month. With just a few clicks, AI can now generate disturbingly realistic nude images. Offenders no longer need explicit images to exploit their victims: just one ordinary, fully clothed photo suffices. Perhaps predictably, the problem is metastasizing. In one survey of teachers and students in US schools conducted by the Center for Democracy and Technology, 15% of students and 11% of teachers reported hearing about deepfake non-consensual intimate imagery (NCII) of individuals associated with their school during the 2023-2024 school year. Nearly two years later, it’s safe to say that millions of students, many under 18, are witnessing and experiencing this form of online abuse firsthand.

In the roughly two years since this technology began to proliferate, news reports around the country have surfaced incidents in US schools from New Jersey to Los Angeles. At Encode, we’ve been tracking these reports and uncovering a disturbingly consistent pattern of harm. It typically begins when a student encounters an advertisement for a “nudify” or “clothoff” app or website on a platform like TikTok or Instagram. Recent reports indicate that approximately 90% of traffic to the deepfake nudes app Crush, for example, originates from Meta platforms via explicit ads on Facebook and Instagram. The student then takes a screenshot of a classmate’s post from a social media site like Instagram and uploads that image onto a “nudify” website or application. Within seconds, they can create a fake yet alarmingly realistic nude image and spread it across social media, through text messages, and over school networks. In some cases, students have targeted dozens of victims at once, generating deepfake images of up to 46 teen girls in a single incident. Victims often remain in the dark until the images are already widely spread online.

Yet, even as these cases make headlines, the true scope of this issue is likely far greater than what is publicly reported. Many victims are hesitant to come forward, fearing retaliation or further exposure. To bridge the gap between news coverage and the actual scale of deepfake abuse, our team at Encode recently created the Child Deepfake Incident Map: a digital tool designed to track and visualize reported cases across the US. The interactive map displays public incidents reported in the news and includes an anonymous reporting form, allowing victims to document basic geographical and demographic information without compromising their privacy. By making this data accessible, we aim to highlight the urgency of the issue and motivate policymakers, school officials, and tech platforms to take meaningful action.

This problem extends far beyond the US; it has already unfolded with devastating consequences in countries like South Korea, where coordinated efforts have uncovered over 500 cases of AI-enabled sexual abuse through anonymous reporting tools. Meanwhile, platforms like Telegram have been identified as hubs for distributing non-consensual AI-generated explicit content; one such channel reportedly had over 220,000 members engaged in sharing these illicit images. Some victims have come forward to speak about their experiences, but most ask to remain anonymous to avoid further compromising their privacy. From these stories, we learn that one woman tried to take her own life after discovering explicit deepfake videos of herself; another halted her studies after developing post-traumatic stress disorder when nude images of her were shared among acquaintances; and a teacher lost her job after parents found a circulating deepfake image of her. Victims report not only the initial horror of realizing they have been targeted but also an ongoing sense of humiliation and distrust. Without meaningful intervention, more people, many of them children, will experience these violations and their lasting consequences.

Understanding the scale of the issue, however, is only the first step toward addressing it effectively. A comprehensive solution requires clear school policies that prioritize education and prevention, legislation that strengthens accountability and mandates action, and platforms that promptly remove harmful content when it is reported. Many school administrators, unprepared for this emerging challenge, struggle with how to respond when incidents arise. Schools rarely have specific policies or educational programs in place to address the issue, and their initial responses can sometimes exacerbate the situation. At Westfield High School in New Jersey, for instance, the victims’ names were publicly announced over the school intercom. To prevent further harm, schools should update their codes of conduct and their sexual harassment and cyberbullying policies to explicitly prohibit the creation and distribution of deepfake sexual imagery, treating such incidents with the same seriousness as those involving real images. These updated policies should be clearly communicated through revisions to existing educational curricula on sexual education and student conduct.

Additionally, training for educators and administrators is crucial to ensure timely reporting, prevent delays, and protect student privacy. Schools should establish clear investigative procedures to determine appropriate consequences, coordinate law enforcement involvement when necessary, and ensure victims have access to support resources. Beyond schools, stronger laws are needed to close loopholes that allow AI-generated abuse to persist. The DEFIANCE Act and the TAKE IT DOWN Act, for example, would establish clear legal frameworks to effectively address these incidents.

Recognizing the magnitude of this issue and implementing concrete measures are essential to protecting children and preventing new forms of sexual abuse from compromising their dignity, privacy, and well-being. We have a window of opportunity for collective, decisive action to curb these harms before they become entrenched in our schools and societies.

Authors

Katherine He
Katherine He is a third-year student at Yale University and the Vice President of Global Chapter Programs at Encode. In her role, she directs grassroots legislative and advocacy initiatives in state and university chapters of Encode. Recently, Katherine organized a national campaign to prevent the p...
Adam Billen
Adam Billen serves as the Vice President of Public Policy at Encode, where he crafts and advances Encode’s policy priorities and builds bipartisan, cross-issue coalitions. His work has included passing the first-ever restrictions on AI's use in nuclear weapons operations in the FY25 NDAA, advancing ...
