Podcast

The Policy Implications of Grok's 'Mass Digital Undressing Spree'

Justin Hendrix / Jan 4, 2026

Audio of this conversation is available via your favorite podcast service.

In what Reuters called a "mass digital undressing spree," Elon Musk is provoking outrage after his Grok chatbot responded to user prompts to remove the clothing from images of women and pose them in bikinis, and to create "sexualized images of children," posting the results on X.

In response to another user prompt, on December 28, the Grok X account posted an 'apology' for one such incident:

I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues.

Whether the company is actually taking the matter seriously is an open question; a Reuters inquiry to xAI was met with the auto-reply "Legacy Media Lies," while Elon Musk has reportedly posted laugh-cry emojis in response to some of the images generated.

To discuss this latest controversy and the broader policy implications of generative AI with regard to child sexual abuse material and nonconsensual intimate imagery, I spoke to Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered AI and author of numerous reports and articles on these subjects, including for Tech Policy Press.

What follows is a lightly edited transcript of the discussion.

Riana Pfefferkorn:

My name is Riana Pfefferkorn. I'm a policy fellow at the Stanford Institute for Human-Centered AI.

Justin Hendrix:

Riana, I appreciate you so much speaking to me on what are the last waning days of holiday vacation for most folks, but this is an important issue and one that I saw you commenting on on social media. Very grateful that you took the time to talk to me for the podcast. And of course, we're talking about this crisis, I suppose, certainly for the victims, that has emerged over the last couple of days with regard to xAI's chatbot Grok, which is being used to non-consensually undress people based on images submitted by users. Reuters has called it a "mass digital undressing spree." What do you make of this latest controversy for Elon Musk and his chatbot?

Riana Pfefferkorn:

I mean, it really demonstrates that for people who work in content moderation, the holidays are just another day really in some ways. Although one of the issues here seems to be the perception that there has been recalcitrance or even non-responsiveness in terms of doing anything about the imagery of adults and children alike that has been cropping up on Grok in the last few days.

Justin Hendrix:

The Reuters journalists analyzed a single 10-minute window on X where they tallied 102 attempts by X users to use Grok to digitally edit photographs of people so they would appear to be wearing bikinis, often targeting young women. One prompt explicitly asked the AI to put a woman into a very transparent mini bikini. Reuters reports that Grok complied with about one in five requests. Have you been paying attention to this reporting or seeing any of this yourself on X?

Riana Pfefferkorn:

So I haven't been using X in the last couple of years. I'm more of a Bluesky girly at this point in time, but I've certainly seen a lot of discussion about this issue that has been happening on Grok. And one of the things that I'll admit I find a little bit surprising is that people are using their accounts to interact with Grok and have their own usernames attached to these requests that they're making. I don't know why they would think that would be a good idea, to put themselves on the record as requesting this kind of material. And it demonstrates, I think, some of the pitfalls of the 'Spicy Mode' that Grok had allowed in recent months, in part because, frankly, the not-safe-for-work use case for generative AI is one of the major use cases for it. There is a lot of demand for being able to make explicit content.

In and of itself, that's not a problem. It's a problem when it becomes non-consensual deep fake pornography of adults, and it's a problem when it becomes child erotica or CSAM (child sexual abuse material, previously known as child pornography) of minors made using these AI tools. And honestly, I think this is just going to be a bigger and bigger headache for basically any company that allows generative AI: trying to navigate that tightrope between having a real call amongst their user base to allow them to create not-safe-for-work content, while also facing up to the fact that we have seen a growing problem, one that has reached such proportions that even Congress passed a law in 2025, the TAKE IT DOWN Act, to deal with non-consensual deep fake pornography of adults and minors alike, which has emerged as perhaps one of the most widespread and most disgusting abuses of generative AI that we've seen to date.

Justin Hendrix:

I do want to get into regulatory responses, potential legal responses for victims, but just staying with this theme of the use of these systems for this purpose. We've seen OpenAI announce that it's going to get into the erotica content business, the erotic chat business very explicitly. The Atlantic's Matteo Wong reported that an update to Grok's system prompt last fall explicitly stated there are no restrictions on fictional adult sexual content with dark or violent themes.

There's this issue we see again and again around companies appearing to want to draw a thin line in terms of what's acceptable with regard to minors and content that may target minors: this idea that words like 'teenage' or 'girl' don't necessarily imply that someone is underage, and that the chatbots should err on the side of permissiveness. We saw this again, I think, in the Reuters reporting from Jeff Horwitz on Meta last year as well. This seems to be just the direction of things.

Riana Pfefferkorn:

Yeah. And I think that we can draw something of a line between chatbot interactions where it's text only, which poses one set of issues, and a much different set of issues where we're talking about the generation of imagery or even video. Using a computer to manipulate an image of a real, identifiable child into CSAM has been a federal crime for 30 years at this point. And it's now a federal crime, and has been a crime state by state under what used to be called revenge porn laws, to create non-consensual deep fake pornography of adults, primarily of women, especially female celebrities. That was one of the earliest applications for the more rudimentary deep fake technology that came out of a subreddit back around 2017. This is what this technology has always been used for.

And so I think we can talk about the different headaches and the different implications and questions that arise with regard to chatbots as distinguished from imagery. But with imagery, I think there's a much more clear-cut risk of liability when it comes to the company itself. Probably a lot of Tech Policy Press listeners are pretty familiar with Section 230. Section 230 does not immunize companies with respect to violations of federal criminal law, and its carve-out expressly calls out the portion of Title 18, the criminal section of the US code, that pertains to what are still called the child pornography statutes. So the liability questions, not to mention the public optics and public relations questions, become much more acute when we're talking about imagery of children, and now, thanks to more recent changes in the law, increasingly with respect to imagery of adults as well.

Justin Hendrix:

It doesn't appear that X or Musk are taking this very seriously. We've seen press queries on this issue met with the auto-reply "Legacy Media Lies." Musk even posted, according to Reuters, laugh-cry emojis in response to edits of people in bikinis. I mean, what do you make of the response so far from X? How could that potentially land them in hot water?

Riana Pfefferkorn:

It's certainly not a new problem. I spoke to Business Insider a few months ago for a story that they had done where one of their reporters, Grace Kay, had talked to about a dozen people who work at xAI, who are training the chatbot, who had seen a lot of instances of users requesting AI generated CSAM there. And so it has been something that has been brewing for months at minimum, if not longer than that. And I will say that I think it is usually incumbent upon most companies to ensure that their public facing communications demonstrate that they are taking this issue seriously rather than to be seen as making light of it, whether that's with respect to adult imagery or with respect to imagery of children.

And so it's, I think, a questionable response to be seen as making light of the people who are real victims here, where we're not necessarily talking about imagery of people who do not exist, which raises, again, yet another set of policy issues and trade-offs and whatnot. We're talking about imagery of real people, young celebrities. I saw some discussion about the young star of the latest season of Stranger Things, which I've been watching over the holiday break. And it's just heartbreaking to see that it seems like year after year, more and more people get fed into the maw of this kind of online depravity, frankly, and shouldn't have to deal with that.

The TAKE IT DOWN Act comes into force this coming May. To back up a minute, Congress passed this law last May, May of 2025. One of the provisions that gives the law its name, its acronym, if you will, is the portion that says that when people who are the victims of non-consensual imagery, whether real nude imagery or deep fake imagery, report it to platforms, the platforms must take it down very promptly, within 48 hours.

Now, Congress gave platforms a year to basically build up the infrastructure to be able to comply with that takedown mandate, which will come into effect in May. And so by and large, I would imagine that across Silicon Valley and anywhere else, electronic service providers, to use the terminology in the law, are preparing to comply with this, either building it into their existing workflows for taking down and reporting CSAM that gets reported to them or that they detect on their own systems, or building out some other workflows for handling reports from people who are depicted in these images. A lot of online platforms are already voluntary members of other initiatives, ones that predate the TAKE IT DOWN Act, for having your imagery taken down if you are the person depicted in it and don't want it online.

So to date, it kind of seems like xAI may be an outlier, at least among the larger and more prominent sorts of platforms that we see. We have seen an explosion in the popularity of so-called nudify or undress apps, apps largely built on an open source image generation model, usually Stable Diffusion 1.5, that just provide a wrapper so that somebody can upload a single clothed image of a person and get back an unclothed image of them.

And a lot of them, either in their terms of service, in their actual functionality, or both, do not adequately prevent the upload of images of minors. But that exists as kind of a seedy underbelly that has its own economy, bringing in millions of dollars according to a research effort by the media outlet Indicator. You might think of those as standing in contrast to the large, generally more corporate, risk-averse platforms out there that tend to be, I think, by and large seen as good-faith actors that will comply with their reporting requirements with regard to CSAM and that are probably busily getting ready to comply with TAKE IT DOWN. And so this really stands out like a sore thumb, I would say, by contrast.

Justin Hendrix:

One good report on this incident, one that provided a lot of good background and context, came from Kat Tenbarge at Spitfire News. She pointed to conversations she'd had, for instance, with Mary Anne Franks and other experts, pointing out that even with TAKE IT DOWN in effect, there are a lot of obstacles to seeing justice effectively served. There may be cultural or even legal barriers, certainly process barriers. I think you're pointing to that even in talking about the idea that the platforms have to prepare, systems have to be put in place, all sorts of things will have to happen to make that law effectively work. Are you optimistic that we'll take a bite out of this issue in 2026?

Riana Pfefferkorn:

In some ways more than others, maybe. Definitely with regard to the apparatus coming online for having imagery taken down. Now, I share concerns with a lot of other people who work on tech policy that the take-down apparatus will be abused to take down material that's not non-consensual pornography. I mean, President Trump said out loud when he was signing the law that he plans to use it to get unfavorable material about himself taken offline, and he did not cabin that remark to pornographic or explicit material about him.

And so I do think it will be incumbent upon platforms to add to their transparency reports what their TAKE IT DOWN process is looking like: how many of those takedown requests are they receiving, how many are they complying with? Are there any that they're not complying with, and why? And at this point, I think a lot of people would agree that we're probably past the zenith of transparency reporting being a big priority for online platforms.

But I do want to call on platforms to say, if you're going to be reporting about requests for user data, if you're going to be reporting your numbers that you report to NCMEC of CSAM reports, you should also be doing that for TAKE IT DOWN as well, so that the public can evaluate how useful this new law is in terms of combating this particular problem. And of course, at this point, there are laws at the state level. Again, by and large, as you mentioned, this targets the end user, the person who is actually engaging in the publication or threatened publication of this kind of material. And of course, there are always, I think, barriers to people being able to seek justice in court, whether federal or state court from the people who have done that to them specifically.

I like to think that the disastrous PR and the legal liability with regard to AI generated CSAM in particular, not to mention all of the material depicting adults will lead companies to take this question more seriously. I do want to flag though that it's not necessarily a slam dunk for platforms who offer AI image generators to be able to fully prohibit and prevent this material from happening in the first place.

I mentioned earlier that there is a big demand for not safe for work content. And so there's this difficulty to try and say, "How can we make this part of our business use case without straying into legal liability or fully illegal territory for the kind of material that comes out the other end?" But we also know that even when a model is not explicitly prompted for this kind of explicit material, it may generate it unintentionally. That's also been an issue that AI model developers have had to contend with.

And one of the other issues with respect to liability is the ability to try and red team your models to see if they are capable of producing AI CSAM in the first place. There's legal risk even to trying to prompt a model, because technically you are requesting the production of child pornography and then possessing child pornography, again, to use the language that's still on the books. And so when I released a research report with colleagues of mine at Stanford last year on this precise topic, we found that there is a lot of fear of legal risk exposure, both by individual employees who don't want to go to prison for doing their jobs and by companies that are both legally risk-averse and, as I mentioned, typically PR risk-averse about red teaming their models for their capacity to generate AI CSAM.

And one of the things that report mentioned as a policy recommendation is that it would be preferable to have some sort of legal pathway to better enable the people and companies developing these models to safeguard them against their AI CSAM generation capacity, because it can be created even indirectly. And so it would be great to see a legal pathway built so that companies can bolster their ability to safeguard their products in the first place, so that less of this downstream behavior is possible on the output end.

Justin Hendrix:

And can you get a little into the legal depths of this? I mean, why does the computer-generated image defense fail for folks who are generating this material when they are held liable in court?

Riana Pfefferkorn:

So I'll focus on talking about imagery of minors. Many people are familiar with a Supreme Court case from just around the turn of this century that found that fully virtual CSAM is First Amendment protected, at least as long as it's not obscene. If it also qualifies as obscenity, then it can be and has been prosecuted under a federal law pertaining to obscene and sexually explicit imagery of children. I've seen the Department of Justice use that in at least half a dozen different cases where they are prosecuting fully virtual AI CSAM that doesn't depict a real child.

With respect to imagery depicting a real, identifiable child, that portion of the federal definition of what constitutes CSAM was not implicated in that Supreme Court case. It only dealt with fully virtual imagery that wasn't depicting real children. But if you look at the courts of appeals, those that have considered the question have found, by and large, that there is no First Amendment protection for depictions of a real, identifiable kid the way there is for fully virtual imagery of children. The rationale there is that a lot of the harms that come out of depictions of actual sexual abuse are also attendant upon what is popularly called morphed-image CSAM, where an image of a real child is manipulated into sexual content.

So even if there's no hands-on abuse, there nevertheless is psychological and emotional trauma. It's an invasion of privacy. When we spoke to victims of nudified images for our research paper last year, we found that this does tend to have a longstanding impact. People will end up missing school, which affects their grades. They stop participating in the activities that they used to do. And one of the common refrains is a fear that this is going to follow them throughout their lives. That when they are trying to apply for college, when they're trying to apply for jobs, when they're trying to strike up new friendships or relationships, anybody looking them up will find this imagery and won't necessarily know whether it's real or whether it's fake.

And I think that's certainly something that, going back to your question about, am I optimistic about this being something that we can fight in 2026, notwithstanding the potential for abuse of the TAKE IT DOWN requirements, it is my hope that this will help to assuage that fear on the part of victims that they can have an avenue for the prompt take down of this material if it crops up. But basically, that gets back to the reason that courts of appeals have said, "No, unlike that fully virtual imagery, when it's a depiction of a real identifiable child, you don't get to claim the First Amendment as a defense basically." It's because it is closer conceptually to the actual real harms done to children who are actually abused in actual CSAM, which was the Supreme Court's rationale back in the '80s for finding that CSAM falls outside the First Amendment protection altogether.

Justin Hendrix:

We'll see, I suppose, how the platforms put this infrastructure in place. And remember, when Meta made its big announcement last January about changes it was making to its content moderation policies, it did say that this type of concern would remain a priority. It's harder to tell with Musk and X whether any of the infrastructure will be in place after he's fired so much of that apparatus on that platform and dissolved the Trust and Safety Council and all of those things.

It looks like regulators abroad are already acting a little more aggressively. We've seen French ministers who've reported this content to prosecutors already. The UK's Ofcom has made statements to the press about the problem of intimate image abuse and flagged that it could lead to prosecution. I saw a report that Indian officials have threatened legal action against X and its compliance officers and have issued an order directing Musk to take corrective action on Grok. And of course, we've seen the European Commission just recently fine X 120 million euros for other violations. Do you see these sorts of international legal pressures bearing down on this problem potentially having more effect?

Riana Pfefferkorn:

Yeah. I mean, it's a complicated question, because we're also seeing this merging between corporation and state with regard to the current administration, where Silicon Valley-based interests have had a lot of leverage to try and push the federal government into pushing European governments into backing off from their own regulatory frameworks. And they've been pretty unabashed about doing that. That's certainly not limited to Elon Musk. We've seen Mark Zuckerberg doing that as well, by and large just trying to say, "Okay, you've got the Digital Services Act, you've got the AI Act that the European Union has now passed and brought into force," and trying to back them off of that.

There may be more of a persuasive effect if you call back to a couple of summers ago, when Pavel Durov, the head of Telegram, was arrested by French authorities. It was a little bit unclear at the time exactly what was going on, but it seemed to have something to do with the prolific nature of child abuse imagery on Telegram. And I think that may have changed the calculus for a number of executives at tech platforms to think, "Okay, is this now a no-go zone to be able to go to France at all?"

Similarly with India: India has long had a set of guidelines under its telecommunications ministry that frankly allows the arrest of people who work for online platforms in country. I refer to these basically as hostage-taking laws: whatever it might be that displeases the Indian government, you have to have somebody who is responsible in country, not just operating an office fully remotely from Singapore or wherever for the entirety of Asia, and that person can then be rounded up and held responsible.

If you remember, a couple of years ago, in fact, police showed up at one of Twitter's offices in India. In this case, I don't think it was about child abuse imagery, but it goes to demonstrate that any individual human assets that you have on the ground in the country, whether you're Pavel Durov or whether you're just some poor schmuck who happens to be the compliance officer under the IT Act, may be personally at risk. And we've seen that in other situations and places like Brazil over the years as well.

When it comes to leverage by the US government, I think one of the other complicating factors here is that normally, we might expect to see ... We have laws on the books, we have federal laws. This is a federal crime. There is not coverage from Section 230. This is something where potentially we might see companies be worried about whether the Department of Justice is going to come after them for hosting CSAM knowingly and not doing anything about it on their services.

Historically, that particular part of criminal law has not really been enforced against companies, whether because they're doing an okay job or for other reasons. I think we're even more at risk on that front now because, frankly, we've seen reporting in recent months that because of the war on immigrants, so many investigators and prosecutors who have spent much of their careers fighting child sexual abuse and exploitation have been removed from child safety-related investigations and prosecutions and re-tasked onto going after gardeners and nannies.

And there has been a direct drop in the responsiveness that online platforms are now seeing in terms of follow-up from law enforcement after they report CSAM on their platforms through the National Center for Missing & Exploited Children, which then passes those reports on to law enforcement. They aren't seeing as much follow-up anymore on the reports that they submit, and that is directly tied to the reallocation of investigators and prosecutors away from their primary jobs.

And so again, even if you set aside the close links between Elon Musk and the Trump administration, there may just not be enough people minding the store to be able to take action here, whether it's a large, prominent company like xAI, whether it's the sorts of nudify apps that I was mentioning earlier, much less the rank and file of individual users who are out there flagrantly asking for this on Grok with their handles attached to it. It may just end up being a resource question. It was always the case that a very, very tiny percentage of CSAM reports ever actually led to arrests and prosecutions, and, as I said, the federal law governing reporting and removal obligations for CSAM was not really being enforced against platforms, at least not in any public-facing way. And that may be exacerbated now because of the de-tasking of child safety professionals within the federal government away from their jobs.

Justin Hendrix:

Of course, I suppose that problem is just as bad for non-consensual intimate imagery, and in both of these circumstances, both of these phenomena, the issue is the extraordinary explosion of volume that's enabled by these technologies. It seems like an extraordinary avalanche that we'll be dealing with for years ahead.

Riana Pfefferkorn:

I think that's right. And it also highlights the difference between detection and prevention for known instances of CSAM or of non-consensual intimate imagery, or NCII for short, where we have very good filters, such as PhotoDNA, for detecting known imagery and preventing it from ever being posted. One of the challenges with AI CSAM and with AI imagery of adults is that it's new. It's infinite permutations of a number of individual human victims that can be rapidly generated, all within a very quick timeframe.

And one of the challenges for anybody trying to build tooling for that, I think, is that now you're trying to do detection of never-before-seen material. And in doing my research, I've talked to people at platforms who say that they're pretty happy, by and large, with their classifiers and detectors for previously unseen material. Nevertheless, it is a totally separate class of challenge, I think, from the ability to just turn on PhotoDNA ("PhotoDNA go brrr") and have it detect all of the imagery that is known, that has been floating around on the internet for decades.
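To illustrate the distinction Pfefferkorn draws here, below is a minimal sketch of how matching against known imagery works, using the open-source imagehash library as a stand-in for proprietary perceptual hashing systems like PhotoDNA; the hash list and distance threshold are hypothetical illustrations, not any platform's actual configuration.

```python
# Minimal sketch: perceptual-hash matching of *known* imagery.
# imagehash is used here only as an open-source stand-in for proprietary
# systems such as PhotoDNA; the hash list and threshold are hypothetical.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of previously identified imagery.
KNOWN_HASHES = [imagehash.hex_to_hash("f0e1d2c3b4a59687")]
MATCH_THRESHOLD = 8  # max Hamming distance to count as a match (illustrative)

def matches_known_imagery(path: str) -> bool:
    """Return True if the image is perceptually close to a known hash."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)
```

The contrast is the point: a newly generated AI image will never appear in any such list of known hashes, which is why platforms have to layer on machine-learning classifiers that score never-before-seen content, a noisier and harder problem than hash matching.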

Justin Hendrix:

Last question for you, Riana. If you were talking to a victim perhaps of this new Grok capability or another nudifier app, what would you tell them?

Riana Pfefferkorn:

Ooh, that is a difficult thing. I mean, it's such a large problem that requires so many different parts of society to fight. And after the fact, after something has already happened, the options, I think, are much narrower than when we're talking about prevention before the fact. I would at least tell them that they can try and look into what their state laws are. They can try and see, depending on what platform they see this appearing on, whether that platform is part of the existing voluntary initiatives that I mentioned for the removal of either CSAM, where it's of an underage child, or NCII, where it's of an adult. And a few more months from now, which is going to be cold comfort to anybody suffering from this problem right here and now, there will be requirements to have this kind of takedown mechanism in place.

One of the things that has changed in the time that I've been studying this issue is that while, as I mentioned, we've had federal law against morphed imagery of children for decades now, it's only recently, with the advent of generative AI tools, that states have realized that a lot of them didn't ban morphed images in their CSAM laws and have rushed to try and close that gap. And so there may now be potential remedies available under state law that weren't there until recently. And then, as you mentioned, we've also seen the advent of more and more state-level laws against NCII of adults in recent years.

And so I think I would try and tell them to look into what is possible at both the state and the federal level. And if nothing else, we've seen people try and get their imagery taken offline by hook or by crook, even if it's by filing DMCA takedowns when that wasn't what the DMCA was for. People would use whatever mechanism was available to them. And I think also just maybe not to be ashamed and not to allow yourself to be stigmatized. We've seen this used as a way to bully younger people, and as a way to try and shame public figures, politicians, female politicians, out of office.

And I think one of the reasons that we've seen so many more laws come into effect over recent years is in part because of victims and victims' parents, teenagers who said, "We're not going to sit down and just suffer this as a shameful thing in silence. We're going to stand up and go to our state house and talk to our legislator and make them pass a law so that this doesn't happen to more people." And I think that's also just an important attitude to bring to bear: it is not your fault, and you do not need to be ashamed because somebody has done this to you.

Justin Hendrix:

Riana, let's make a date to check in again after TAKE IT DOWN is in effect. Let's see what we learn from the implementation of that law and perhaps as we see some of these regulators around the world take action in this case or in others.

Riana Pfefferkorn:

Thank you, Justin. It's a date.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President of Business Development & In...
