Privacy Regulators in 61 Countries Back Enforcement Against AI Deepfakes

Ramsha Jahangir / Feb 26, 2026

As investigations into AI-generated sexualized imagery unfold in at least eight countries, 61 data protection and privacy authorities across four continents have put AI image generation companies on notice, declaring that non-consensual intimate imagery is a privacy violation and that regulators intend to act on it.

In the joint statement published this week, coordinated through the Global Privacy Assembly’s International Enforcement Cooperation Working Group, data protection and privacy authorities warned that “recent developments, particularly AI image and video generation integrated into widely accessible social media platforms, have enabled the creation of non-consensual intimate imagery, defamatory depictions, and other harmful content featuring real individuals.”

Authorities added: “We are especially concerned about potential harms to children and other vulnerable groups, such as cyber-bullying and/or exploitation,” describing such systems as posing “significant risks to individuals’ privacy and personal data” and capable of causing “serious harm.”

The declaration does not create a binding international enforcement mechanism. But it places AI-generated intimate imagery squarely within existing privacy mandates and signals that authorities already investigating generative AI systems are operating within a broader, coordinated frame.

The statement reminded organizations that AI content generation systems must comply with applicable data protection and privacy laws, noting that the creation of non-consensual intimate imagery “can constitute a criminal offence in many jurisdictions.”

The working group’s co-chairs — ODPA Guernsey, OPC Canada, SIC Colombia, PCPD Hong Kong (China) and Datatilsynet Norway — said the initiative reflects a shared enforcement priority. “By sharing strategies, authorities can address the risks of AI-generated imagery across enforcement, policy and education,” the coordinators told Tech Policy Press, calling it “a more powerful and holistic approach” to a global issue.

Grok as a test case

The statement does not name any specific company or product. However, it follows a series of formal probes into the Grok AI system developed by Elon Musk’s xAI and integrated into X, after millions of AI-generated sexualized images circulated on the platform.

The regulators pointed to recent developments as evidence of that coordination. In the past few weeks, Canada’s privacy regulator expanded an ongoing investigation into X Corp and launched a new probe into xAI. The UK Information Commissioner’s Office opened investigations into X Internet Unlimited Company and X.AI LLC, covering their processing of personal data in relation to the Grok artificial intelligence system. Hong Kong’s Office of the Privacy Commissioner for Personal Data issued an advisory offering guidance to the public on the safe use of AI chatbots to safeguard personal data, work that helped inspire a Crown Dependency Advisory on AI-generated imagery issued jointly by the privacy and data protection commissioners of Guernsey, Jersey and the Isle of Man. “This is an illustration of the power of partnership in action,” the co-chairs said.

The 61 signatories span Albania to Uruguay, but the United States, where xAI is headquartered, has no federal data protection regulator to join the coordination effort.

“I think the key issue is the fact that the US does not have a federal privacy law yet,” said Dr. Gabriela Zanfir-Fortuna, Vice President for Global Privacy at the Future of Privacy Forum. “Passing a federal privacy law should be a priority, especially as it becomes obvious that data protection and privacy rules have the force and breadth to tackle harmful AI practices for individuals through their tech-neutral approach, which is not chilling for innovation.”

Limits and promise of international cooperation

How much the partnership can realistically deliver against platforms such as xAI, which is already under investigation in multiple jurisdictions, is the more difficult question the statement leaves open.

The statement commits authorities to sharing information “consistent with applicable laws.” The co-chairs acknowledged that “there are aspects of laws that can limit the sharing of information, such as personal data,” but said that “the vast majority of compliance activities can be discussed to some level,” and noted that many authorities have explicit legal provisions facilitating information sharing among international counterparts.

Past cases illustrate both the potential and the constraints of such collaboration. Investigations into Clearview AI were pursued in parallel by regulators in the UK, Canada, Australia and several EU member states. While some penalties remain under appeal and collection has been uneven, the co-chairs said that such actions have had “notable compliance impacts,” citing Clearview’s exit from the Canadian market and ongoing UK court proceedings over a £7.5 million fine.

Zanfir-Fortuna said the Clearview case highlights structural constraints. “The Clearview AI cases are a cautionary tale about the limits of extraterritorial enforcement of data protection laws, as they show that when push comes to shove enforcement is virtually impossible without the support of local authorities for the sanctioned company,” she said. She noted that the cases largely stemmed from individual national actions rather than coordinated EU-level enforcement.

Clearview was also unusual because it claimed it had no presence in Europe, meaning the GDPR’s “one stop shop” mechanism did not apply. “Most of the big AI developers do,” she said, adding that, for instance, for X, the lead authority is the Irish Data Protection Commission, which has already opened an investigation into Grok.

“This cooperation within the EU is legally defined and mandated,” Zanfir-Fortuna said, “whereas the authorities signing the GPA declaration are not bound by an international mechanism with legal effects.”

That, she argued, does not diminish what the declaration represents. “It shows the authorities have identified this is a big issue under privacy and data protection laws, they have the will to act, they have independence within their own legal systems and have shown in the past that voluntary cooperation can be effective, even if it results in separate enforcement actions.”

Privacy vs online safety enforcement on AI harms

The regulatory pressure may not come from one direction alone. As online safety authorities in several jurisdictions pursue investigations into Grok over AI-generated deepfakes, privacy regulators are approaching the issue through a different, and in some respects more established, legal lens.

“In contrast with online safety regulators, DPAs have the advantage of having rulebooks that are more mature, and of having greater experience in flexing their enforcement muscles,” said Owen Bennett, who specializes in international platform regulation. “We might see parallel actions from both DPAs and online safety regulators that seek to tackle the same compliance failure from two different angles.”

In the Grok case, he said, that could mean scrutiny over unlawful processing of personal data on one hand, and a failure to assess and mitigate content risks on the other. Similar parallel enforcement has already emerged in age assurance and age-appropriate design contexts, and may become more common as privacy and online safety harms increasingly overlap.

“When 61 data protection regulators, from every continent of the world, come together to send a message to industry, it sends a warning that regulators intend to use the mechanisms available to them to share supervisory insights and intelligence when companies don’t play ball on this issue,” said Bennett.

Authors

Ramsha Jahangir
Ramsha Jahangir is a Senior Editor at Tech Policy Press. Previously, she led Policy and Communications at the Global Network Initiative (GNI), which she now occasionally represents as a Senior Fellow on a range of issues related to human rights and tech policy.
