AI is Supercharging the Child Sex Abuse Crisis. Companies Need to Act.
Sarah Gardner / Apr 29, 2024

Sarah Gardner has worked in the child safety space for more than a decade. She is the CEO and Founder of Heat Initiative, an organization advocating for big tech companies to stop the spread of child sexual abuse material (CSAM) on the internet.
For more than a decade, I’ve worked to end the distribution of images and videos online depicting horrific child sexual abuse. Far too often, survivors and advocates have been met with resistance – from technology companies and legislators alike.
AI threatens to supercharge this crisis, making it possible to generate sexual content of anyone, including children, with the click of a button. Reports of AI-generated nude images of middle and high school students continue to pop up across the country, and celebrities have been targeted by apps that can “undress” a photo.
And little is being done right now to stop it. Some have suggested building guardrails into AI models so that they cannot create sexually explicit images.
These measures could curb a potential explosion of AI-generated child sexual abuse, but they don't address the underlying issue: how technology enables a culture of sexual exploitation of children.
Earlier this month, the National Center for Missing & Exploited Children released a new report on the distribution of CSAM online. The report found a rise of more than 12% in reports of CSAM online in 2023 – with a total of more than 36 million reports – and an increase in the use of generative AI to create this type of content.
Given that surge, you would expect report numbers to rise across all major tech platforms. Yet Apple submitted just 267 reports.
The tech giant is simply not detecting known child sexual abuse material being shared via iCloud. That's not because this abusive content isn't being distributed on iCloud – a former Apple executive admitted the platform is being used to distribute sexual images of children – but because the company is purposefully turning a blind eye.
For a moment, it looked like that would change. In August 2021, Apple announced it would roll out a cutting-edge, privacy-forward solution to detect known child sexual abuse images and videos in iCloud. The system would match uploads against already-identified abuse imagery, much like the spam filters Apple was already using to protect users, and it required detection of at least 30 images before an account was flagged. The community of activists, allies, and survivors working on this issue was thrilled at this new level of protection for kids.
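To make the concept concrete, here is a minimal, hypothetical sketch of that kind of threshold-based matching against a list of known images. The hash set, threshold value, and function names are illustrative assumptions on my part, not Apple's actual design.

    # Hypothetical sketch of threshold-based matching against known images.
    # The hash list, threshold, and names are simplified assumptions.

    KNOWN_CSAM_HASHES = set()   # fingerprints of previously identified abuse images
    MATCH_THRESHOLD = 30        # minimum matches before any account is flagged

    def count_matches(uploaded_image_hashes):
        """Count how many uploaded images match the known-image list."""
        return sum(1 for h in uploaded_image_hashes if h in KNOWN_CSAM_HASHES)

    def should_flag_account(uploaded_image_hashes):
        """Flag for human review only once the match count crosses the threshold."""
        return count_matches(uploaded_image_hashes) >= MATCH_THRESHOLD

The point of the threshold is that no single photo, and no content a user created themselves, triggers review; only a sizable collection of already-identified abuse imagery does.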
And then – Apple quietly changed course. A small contingent of "privacy at all costs" diehards complained, and Apple pulled back the technology. Let me be clear – these privacy hardliners are claiming that illegal photos and videos of a child being raped and abused are someone's private data. I fundamentally disagree with this assertion.
Apple has yet to do even the basics of online child protection, which raises major questions about what will happen when the company integrates AI models into its products and is forced to confront content moderation. Will it continue to stand by and ignore the proliferation of illegal and vile content made possible and protected by its technology? Apple's longtime negligence on child sexual abuse is coming home to roost. Will it finally have to do something?
All of this demands that we step back and confront the underlying problem: images and videos of sexual abuse and nude photos of minors continue to circulate widely on the internet. Our country's existing infrastructure to find, detect, and save child victims is already stretched beyond its limits, and it is practically a given that millions of AI-generated child sexual abuse images will overwhelm it entirely. So let's focus our blame and attention on the companies that deploy technologies with inadequate or nonexistent safety features.
The training data for many AI models included child sexual abuse content because it is readily available on the internet. Now the creators of those models must clean them up and put reasonable filters and safeguards in place that can detect and suppress these images.
I can't state strongly enough that companies need to act now. Don't wait for Congress to tell you what to do; prioritize this issue before it is too late. We've only seen the tip of the iceberg with AI-generated images of A-list actresses and middle and high schoolers. Finally listen to the stories of survivors whose pleas have been ignored for years. It's time to act.