X Tried to Sidestep Brazil's Inquiry on AI Deepfakes. The Government Just Pushed Back.
Tatiana Dias / Feb 13, 2026
Tatiana Dias is a fellow at Tech Policy Press.
Over the past several weeks, Brazil has been locked in a regulatory standoff with X over its generative AI tool, Grok, after the platform was found to have generated millions of sexualized images — including thousands depicting minors.
X admitted to the underlying vulnerabilities in a letter to Brazilian authorities last month, but has sought to limit its legal exposure by arguing that its Brazilian subsidiary, its parent company X Corp, and xAI LLC — the company that operates Grok — are legally separate entities, and that the @Grok account should be treated as any other user on the platform.
Brazilian regulators, lawmakers, and legal experts have now forcefully rejected that framing. After more than a month of back-and-forth, and with evidence that the problem persists, Brazilian regulators have escalated their demands, including a requirement that the company implement technical safeguards, backed by the threat of daily fines.
The Brazilian government demands answers
According to the Center for Countering Digital Hate, in 11 days of operation beginning on December 29, Grok generated more than 3 million sexualized images – 23,000 of them appearing to depict minors.
In early January, Congresswoman Érika Hilton of PT, the ruling left-wing party, and Idec, Brazil's Institute for Consumer Protection, asked the Federal Public Ministry to open an investigation and suspend Grok in the country.
Shortly after, X Brasil sent a letter to the Digital Rights Secretariat of Brazil's Ministry of Justice, marked as “confidential” by X – but later posted on the Brazilian government’s website. In the letter, X Brasil claimed that the @Grok profile is an account like "any other user account on the platform" – and, therefore, would be "subject to the same review, rules and policies." While acknowledging failures in its policies, it attempted to evade responsibility for Grok by stating that X in Brazil has limited operations in relation to X Corp.
The Digital Rights Secretariat responded harshly. It stated that X attempted to deflect its responsibility, mischaracterized the nature of its service and acknowledged that its pre-existing policies had failed, which, for the agency, "evidenced that the service was made available with a design defect." The agency then recommended coordinated action by other Brazilian government agencies against Grok.
The Brazilian government’s official response followed on January 20, with a joint recommendation from three Brazilian public agencies – the Federal Public Ministry, the National Data Protection Agency and the Consumer Secretariat of the Ministry of Justice – which demanded that Grok take measures to prevent the generation of sexualized images without consent, in addition to creating procedures to identify and remove such content.
Idec considered the recommendation insufficient and demanded tougher actions from the Brazilian government. "The mere issuance of recommendations, without any precautionary measure or suspension, is a license for violations to continue while bureaucracy drags on. It's like sending a polite letter to a forest fire."
X Brasil’s strategy to deflect responsibility
In a follow-up letter to the government on Jan. 27, the company doubled down, arguing that X Corp and xAI LLC operate independently of X Brasil, and that the @Grok profile is operated by xAI LLC. It further claimed that X Brasil "does not possess technical or legal means to intervene in the operation and functioning of the platform."
But according to some experts, X Brasil’s deflection has an objective: to frame the generation of sexualized images and deepfakes as user-generated content, and therefore absolve the X platform of responsibility.
"The claim is, obviously, legally very weak and mistaken because the integration between X and Grok is evident," says Yasmin Curzi, a legal postdoc at the University of Virginia. "Grok is natively embedded in X, is actively promoted by the company, and both X and xAI benefit economically from this integration."
Curzi also notes that joint data processing is likely still occurring, which, under LGPD, Brazil's data protection law, would constitute co-responsibility. "Both must respond jointly, considering that the common controller is the same person, Elon Musk."
X's response to Brazilian authorities follows a familiar playbook used by big tech to resist liability in the country. The argument that corporate entities should be treated as legally separate was already deployed by Meta in a 2024 antitrust investigation. That same year, X itself used the same strategy in an effort to evade a law enforcement action, but the claim was rejected then as well.
In that case, Musk decided not to comply with a Supreme Court decision to suspend X accounts linked to those involved in the January 8 attempted coup. The Court responded by asserting joint liability among companies within the same economic group. In addition to suspending X throughout Brazilian territory, it froze Starlink's bank accounts to cover fines owed by X.
Mariana Valente, an assistant professor of international economic law at the University of St. Gallen, notes that Brazilian courts have long rejected this argument. In her view, Grok's case involves three legal dimensions: accountability for those who produce the images, for the AI agents involved in creation, and for dissemination on X.
Valente says it is interesting to separate the responsibility of each in these stages – the creation with AI agents, regulated by AI laws, and the distribution, regulated by social media and platform laws. But, in X's case, the complexity lies precisely in the fact that creation and distribution are on the same platform and belong to the same economic group. "You can't even say it's third-party content, because Grok is co-creating. The user asks Grok, which belongs to X itself, and it's automatically posted," she says.
X's slow response to mitigate the problem
In the letter sent on January 19, X Brasil said that, as soon as it was informed about the creation of sexualized content in Grok, it adopted "rapid measures" such as technical safeguards, policies and content moderation actions to deal with "certain inappropriate interactions with the @grok account by some users."
In practice, the company took nearly three weeks to respond to the problem. The @Grok profile posted an "apology" on December 31 about generating sexualized images of adolescents. But, according to the letter sent to the Brazilian government on Jan. 27, it was only on Jan. 20 that "comprehensive technical solutions were implemented to globally block the ability of all users to generate, through Grok or the @Grok account on X, non-consensual images of real people."
In the letter, the company also acknowledged the "complexity of implementing effective safeguards," mentioning vulnerabilities such as users who "attempted to circumvent existing safeguards" with "specific prompts" or "apparently benign" ones, in addition to the difficulty of "distinguishing different levels of clothing and different attempts to circumvent them."
The company also stated that, given the difficulty, it prioritized taking measures to prevent the generation of content of minors, and, later, of adults.
The company also said its operators conducted "proactive searches" to identify and take action against accounts attempting to manipulate the @Grok account through interactions and content that violate the platform's guidelines, claiming to have removed 5,300 posts and suspended 730 accounts.
Brazilian authorities, however, countered that there is no concrete evidence, technical reports, or oversight mechanisms to verify the effectiveness of X's claims. “Preliminary tests conducted by the institutions’ technical teams indicate the persistence of failures,” they say.
An investigation from Reuters published Feb. 3 showed that, despite X's claims, Grok continues to generate sexualized images without consent. Six reporters, both men and women, submitted images of themselves to Grok between Jan. 14-16 and Jan. 27-28 – that is, after X's response to Brazilian authorities. They asked Grok to alter their photos into sexually provocative or humiliating poses. Grok generated sexualized images in 29 out of 43 attempts.
For Yasmin Curzi, Grok failed to implement basic industry-standard controls – such as NSFW content filters, watermarks, and CSAM detection – that already exist in competitors like DALL-E and Midjourney. "There was, in my opinion, a deliberate choice in the system's design and governance, a result of Elon's same fundamentalist free speech ideology."
"Unfortunately, we do not expect an immediate, genuine change in institutional posture. The pattern of big tech companies, and X in particular, has been to respond with vague promises that 'they are improving their systems' and 'have zero tolerance for abuse,' while, in practice, the damage continues to occur massively," says Luã Cruz, coordinator of the telecommunications and digital rights team at Idec.
Regulators push back
In a joint statement released on Wednesday, Brazilian authorities raised the stakes for the company. According to the Federal Public Ministry, X "was not transparent about the measures it claims to have taken in response to the reported violations, limiting itself to either generic information or information not specifically related to this year's Grok incident."
They have ordered X to immediately implement measures to prevent the generation and circulation of improper sexualized images. If the problem persists, harsher measures may be taken, such as daily fines and legal actions, including for damages caused.
Specifically, Brazilian authorities have determined that X must implement technical and organizational measures that include:
- Submission, within five days, of a proof of implementation of technical and organizational measures to prevent Grok from generating non-consensual sexual content;
- Monthly reports, starting in February, with details about the company's actions to suppress the production and distribution of deepfakes without consent; these reports must contain the number of harmful posts that were taken down and the number of accounts involved that were suspended;
- A detailed list of measures already adopted and a detailed metric report with verifiable quantitative data on content identified and removed, average response times, criteria used, and any corrective measures taken.
"Given the successive security failures and the platform's difficulty in complying with the law, rigorous measures have become the most viable path," says Idec's Cruz.
Implications for future internet regulation in Brazil
The Brazilian government's central argument is that X must demonstrate compliance with its "duty of care" and prove the absence of "systemic failure" that enabled the widespread circulation of harmful content on the platform. In its joint recommendation on Jan. 20, the government cited a Brazilian Supreme Court decision that reshaped the country’s platform liability regime.
Since 2014, Article 19 of the Marco Civil da Internet (Internet Civil Rights Framework) has largely shielded platforms from liability, holding them responsible only for content posted by users if they fail to comply with a court order requiring its removal. But last year, the Supreme Court held that Article 19 is partially unconstitutional, and platforms are now required to respond to harmful content and to actively monitor and suspend risky functionalities.
Since the decision is still subject to appeal, X sought to exploit this loophole. In its response, the company claims it already removes harmful content and argues that the government’s request that it prove its compliance with duty-of-care obligations anticipates a favorable decision by the Supreme Federal Court.
According to experts, the change to Article 19 will obligate companies to monitor and suspend risky activities – and will represent "a fundamental step especially in combating non-consensual deepfakes of women's images that can be considered gender-based violence," says Curzi.
In the Grok case, however, this article is not the primary issue. Brazilian law already stipulates that, in cases involving intimate content, platforms can be held liable if they do not remove it immediately after notification. "Even if the decision is not in effect, material constituting CSAM is evidently illegal. As is the processing of data of minors without consent," Curzi adds.
The Grok case may also impact discussions around the AI bill currently under debate in Brazil's Chamber of Deputies, a proposed law that follows a risk-based regulatory model.
The bill already includes a prohibition of systems that facilitate the creation of CSAM. Curzi and Valente, however, argue that tools of this type without minimum safeguards, such as filters, should be classified as "excessive risk," and that generative AI image creation tools should go through a regulatory sandbox, "preventing systems without controls from reaching the market," explains Curzi.
“These regulations would require companies to conduct assessments and tests before launching something on the market. Currently, this obligation doesn't explicitly exist,” says Valente.