Regulators Are Going After Grok and X — Just Not Together

Ramsha Jahangir / Jan 26, 2026

The AI app Grok on the App Store on an iPhone, against a backdrop of search results displayed on the social media platform X (formerly Twitter) on a laptop, in London. Picture date: Thursday January 8, 2026. (Photo by Yui Mok/PA Images via Getty Images)

The European Commission on Monday opened formal proceedings against X under the Digital Services Act (DSA), deepening its scrutiny of the platform’s integration of the Grok AI chatbot amid growing international alarm over its misuse to produce child sexual abuse material and intimate deepfakes.

The Commission’s investigation will examine whether X fulfilled its legal duties to assess and mitigate risks stemming from Grok’s deployment in the EU, including the spread of manipulated sexually explicit content. The Commission said these risks appear to have materialized, citing the “serious harm” they have caused, and it will also determine whether X submitted the required risk assessment prior to launch. The move expands the Commission’s existing proceedings concerning X’s recommender systems, which now reportedly rely on Grok.

The EU joins a growing number of jurisdictions launching or escalating formal action against X.

An urgent but fragmented response

Governments and regulators in at least eight countries have now confirmed action against X and xAI, Grok’s developer. But the legal frameworks and enforcement tools being applied differ widely. The responses so far reflect both increased urgency and the continued limits of cross-border supervision.

In the United Kingdom, Ofcom opened a formal investigation under the Online Safety Act on January 12, examining whether X complied with duties to prevent the spread of illegal content, including CSAM and non-consensual intimate imagery. The regulator said its investigation will assess Grok’s role in the creation and distribution of such content, as well as whether X conducted the required risk assessments before rolling out significant service changes. While X said it had introduced measures to prevent Grok from generating intimate imagery, the investigation remains ongoing.

Australia’s eSafety Commissioner has not launched formal proceedings but has sent a request for further information to X, asking for details on safeguards in place to prevent Grok’s misuse. eSafety noted that X had already been subject to transparency notices related to its handling of child sexual exploitation material and generative AI features.

In Canada, the federal Privacy Commissioner expanded its investigation into X to examine whether Grok was used to generate explicit deepfake images of individuals without consent. The inquiry, which now includes xAI, will assess whether either company obtained valid consent to use personal data and whether they complied with Canada’s federal private-sector privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA).

In India, the Ministry of Electronics and Information Technology issued a warning to X after identifying what it described as serious failures in preventing the spread of sexually explicit content generated by Grok. X responded with an action-taken report, blocking over 3,500 pieces of content and deleting more than 600 accounts. However, government officials were reportedly dissatisfied, describing the response as insufficient and lacking specific action on the underlying policy failures. Officials said X risked losing legal protections if it failed to fully address the concerns.

Some countries have gone further by restricting access to Grok altogether. In Malaysia, authorities temporarily blocked Grok earlier this month following backlash over its ability to produce sexualized imagery. Access was restored last week after X introduced additional safety measures, according to the country’s communications regulator.

In Indonesia, reopening access is contingent on Grok meeting strict technical and legal requirements, including content moderation adjustments and formal registration as an electronic system provider. The Ministry of Communication and Digital (Komdigi) confirmed on January 14 that it had summoned representatives from X and held formal discussions. According to the Director General of Digital Space Supervision, Alexander Sabar, X committed to implementing improvements and content moderation adjustments in line with Indonesian law.

Brazil has given xAI 30 days to stop Grok from producing fake sexualized images or face administrative and legal consequences. The warning came in a joint statement from the national consumer protection agency, the data protection authority, and the federal prosecutors’ office. Brazilian regulators said xAI must develop systems to identify and remove harmful content and suspend accounts linked to its creation.

Enforcement without alignment

The Grok case has become an early test of whether existing regulatory structures can keep pace with the harms introduced by fast-deploying generative AI tools.

The Center for Countering Digital Hate (CCDH) found that Grok generated more than three million sexualized images in under two weeks, including over 23,000 that appeared to depict children. The scale and speed of this output have intensified pressure on regulators to respond — not only to the content itself, but to the broader question of how to govern a technology that crosses jurisdictions and moves faster than enforcement systems can follow.

The incident has raised difficult questions. What enforcement tools exist when a model is trained in one country, deployed globally, and causes harm in others? And what can regulators do when the platform responsible does not fall clearly within their legal reach?

The fragmented international response has placed renewed attention on coordination bodies like the Global Online Safety Regulators Network (GOSRN). Last week, the Network released a position paper laying out shared expectations for how online platforms should implement age assurance — one of the few regulatory tools currently being deployed to protect children from harmful content and experiences.

The paper sets out four core principles. Age assurance systems must be accurate and fair. They must respect users’ rights and comply with data protection laws. They must allow services to deliver age-appropriate experiences. And they must be enforceable. As the paper puts it, “rules are only as effective as their enforcement.”

Rather than advocating blanket age restrictions, the regulators frame age assurance as infrastructure that allows for risk-based protections. The goal is for platforms to be able to determine whether a user is a child or an adult, and apply safeguards accordingly. The systems must be proportionate, privacy-preserving, and technically robust.

The paper also signals a firmer stance on compliance. It explicitly warns against “regulatory forum shopping,” where companies exploit jurisdictional gaps, and affirms that enforcement must include consequences for services that fail to meet expectations.

Still, GOSRN is a forum for coordination, not enforcement. While its members share strategic goals, they operate under different legal regimes. A spokesperson for Ofcom, a GOSRN member, said cooperation among members continues but acknowledged that coordination is shaped by national frameworks and limitations on information sharing.

Australia’s eSafety Commissioner said it is engaging closely with other safety regulators and child protection bodies that have identified similar patterns involving Grok and other generative AI tools. However, the extent of that coordination remains unclear. When contacted, regulators did not disclose any details about the nature or scope of those engagements.

Some within the Network have previously raised the possibility that this could change. In an earlier interview with Tech Policy Press, the Korea Communications Standards Commission said the Network could evolve into a platform for leading joint investigations and enforcement actions. Any joint action would depend on the willingness and legal authority of member agencies to deepen collaboration and establish a framework for coordinated efforts, the regulator said at the time.

Ireland’s Coimisiún na Meán, which currently chairs the Global Online Safety Regulators Network, said the group “regularly engage[s] and co-operate[s] with regulators across the globe in order to enhance coherence and consistency.” But much of that cooperation remains informal, and the extent to which it translates into coordinated enforcement remains limited.

The Grok case highlights how far off that vision still is. Regulators remain aligned on principles but divided by legal systems, timelines, and enforcement capabilities. Shared standards have not yet translated into shared action, even when the platform and the risks are the same.

Authors

Ramsha Jahangir
Ramsha Jahangir is a Senior Editor at Tech Policy Press. Previously, she led Policy and Communications at the Global Network Initiative (GNI), which she now occasionally represents as a Senior Fellow on a range of issues related to human rights and tech policy.

Related

Analysis
Tracking Regulator Responses to the Grok 'Undressing' Controversy (January 6, 2026)

Perspective
Grok is an Epistemic Weapon (January 13, 2026)

Podcast
The Policy Implications of Grok's 'Mass Digital Undressing Spree' (January 4, 2026)
