Perspective

The Grok Disaster Isn't An Anomaly. It Follows Warnings That Were Ignored.

Bruna Santos, shirin anlen / Jan 9, 2026

The AI app Grok on the App Store on an iPhone, against a backdrop of search results displayed on the social media platform X (formerly Twitter) on a laptop, in London. Picture date: Thursday January 8, 2026. (Photo by Yui Mok/PA Images via Getty Images)

The recent revelations about Elon Musk’s Grok chatbot generating and publishing nonconsensual, sexualized images of women and children in response to user prompts on X are being treated as a scandal. They should instead be understood as the most severe episode yet in a disaster that started years ago.

According to WIRED, separate from the images posted on X, a cache of around 1,200 links to outputs created on the Grok app or website is currently available, and some of these have already been shared on adult deepfake forums or indexed by Google. These outputs, WIRED reports, are “disturbing sexual videos that are vastly more explicit than images created by Grok on X.” 404 Media reports that on Telegram, users are repeatedly jailbreaking Grok to produce “far worse.”

This is what happens when gender-based abuse becomes scalable: long-standing forms of harm are not only amplified by generative systems that lack meaningful controls over what users can prompt and produce, but also migrate rapidly across platforms, moving from private tools to public forums and from fringe channels to mainstream search results, making the abuse harder to contain, remove, or remediate once it spreads.

For years, WITNESS, a global organization focused on using emerging technologies to defend and protect human rights, has warned that synthetic media would not only challenge truth, but would be weaponized against people—especially women, non-binary communities, and children. What we are seeing with Grok—both on X and off it—is not a technical glitch or an edge case. It is the latest iteration of a long-standing harm: nonconsensual intimate imagery (NCII), now turbocharged by generative AI and unleashed by a company with no apparent qualms about the impact.

Authorities in the UK, the EU, India, France, and Malaysia have launched investigations or issued demands for information following reports that Grok could digitally “undress” women and even underage individuals and post the images directly to X. The goal is straightforward: remove the content, suspend the accounts responsible, and compel platforms to cooperate with law enforcement. What’s alarming is how difficult those basic steps have become in an AI-mediated ecosystem. When volume and speed overwhelm technical infrastructure, they don’t just expose enforcement gaps; they reveal the ideology behind the design: scale matters more than safety, experimentation matters more than privacy and consent, and harm is treated as acceptable collateral in the pursuit of growth, visibility, and power. That ideology became even clearer when, following public backlash, X restricted Grok’s capabilities not by meaningfully strengthening safeguards, but by limiting access to paying users. This decision did not eliminate the harm; it monetized the risk. It sent a chilling signal: if you pay, you can still generate abusive content.

Grok, like OpenAI’s Sora before it, exposes a familiar but amplified pattern: the complete absence of consent through synthetic media. Sora’s early flood of videos depicting women being strangled and brutalized showed how quickly “experimentation” can slide into normalized sexual and gender-based violence when safeguards fail. Grok’s so-called “spicy mode” follows the same trajectory. These systems do not invent misogyny; they surface and accelerate it. Women, girls, and non-binary people have long been targeted online through sexualized harassment designed to silence voices, undermine credibility, and reinforce misogynistic narratives. Generative AI makes this abuse easier, faster, and more anonymous, stripping agency from those depicted while normalizing the harm itself. (In perhaps the most egregious example to date, Mother Jones reported that an image of Renee Nicole Good, the Minnesota mother gunned down by an ICE agent on Wednesday, was ‘undressed’ by the app following her death.)

Although Grok’s terms of service prohibit sexual content involving minors and the use of real people’s likenesses, reporting shows the system generated nude videos of public figures without being specifically prompted to do so, and allegedly failed to classify the nudification of underage individuals as illegal content. For children, the harm is severe. Synthetic sexual imagery fuels cyberbullying, coercion, and exploitation at a scale and speed we have never seen before, and once such content exists, it is nearly impossible to contain. Synthetic abuse is not a simulation of harm; it is harm, enabled by systems that failed to intervene at the point of creation. Treating such material as less serious because it is “synthetic” denies the lived reality of those harmed and dangerously erodes protections for the most vulnerable.

As WITNESS has long argued, highly personalized and realistic generative systems risk collapsing the boundary between irony, entertainment, and abuse. When platforms prioritize engagement and rapid deployment over safety-by-design, they legitimize practices that were once fringe and accelerate them into the mainstream. Our work has consistently shown that harassment and abuse thrive in the gaps between platforms, policies, and enforcement. Grok makes those gaps impossible to ignore.

As The Atlantic noted, this kind of content typically remains hidden in private chatbot interactions: out of public view, but no less real. When AI tools are integrated directly into social platforms like X, spillover is inevitable. What begins as a “private prompt” becomes public sludge: synthetic content that circulates and spills onto other platforms without provenance, accountability, or meaningful recourse for those harmed.

What makes this moment especially bleak is how hollow platform accountability has become as synthetic media escalates already existing harms. An earlier statement from Grok claiming it had “identified lapses in safeguards” and was “urgently fixing them” was reportedly generated by the chatbot in response to a user prompt. It was unclear whether any concrete action followed, and the chatbot later issued other contradictory statements, underscoring that it had no real understanding of the context. When AI systems recklessly generate both the harm and the apology, governance turns into theater. Protection becomes performative.

This is why focusing solely on takedowns misses the point. The harm occurs at creation. Detection, provenance, and safeguards must operate before an image exists, especially when the subject is a non-public individual with little power to push back. Treating “adults like adults” cannot mean abandoning safety-by-design, particularly when adult content systems repeatedly fail children and marginalized communities. Companies must also establish clearer ethical guidelines governing user exposure to adult content, including stronger and enforceable restrictions on CSAM-oriented material, whether created or manipulated by AI tools. 

AI developers, tech companies, social media platforms, and regulators must treat nonconsensual sexualized imagery as a design-level risk, not a downstream moderation problem. This includes addressing the presence of NCII and sexualized abuse material in training datasets (content that embeds harm into systems before they are ever deployed). Platforms must block the creation of nudified and sexualized content involving real people, especially children, by default; invest in detection, provenance, and reporting systems that work for non-public figures; and center consent and child protection at every stage of model development and deployment.

We are living in a fragmented audiovisual reality, where abusive content is generated every minute and platforms issue protection statements that may be as automated as the systems they claim to regulate. In that reality, protecting truth is inseparable from protecting people. Grok is not an outlier. If we continue to treat NCII and AI-enabled sexual abuse as secondary to innovation, we will keep rediscovering the same harm, only louder, faster, and harder to undo.

Authors

Bruna Santos
Bruna Santos, Policy and Advocacy Manager at WITNESS and Member of the Coalizão Direitos na Rede. Bruna has long-term experience in Internet governance, platform liability, AI governance, and technology regulation. She was previously a German Chancellor Fellow with the Alexander von Humboldt Foundati...
shirin anlen
shirin anlen is an award-winning creative technologist and AI expert specializing in deepfakes and ethical technology. She is the AI Research Technologist and Impact Manager at WITNESS, where she leads research and strategy on AI detection and the human rights implications of audiovisual AI in high-...
