Analyzing Regulatory Gaps Revealed by India’s Response to the Grok Debacle
Avanti Deshpande, Jhanvi Anam / Jan 21, 2026
Union Minister Ashwini Vaishnaw briefs the media in New Delhi on Wednesday, March 5, 2025. (Kamal Singh/PTI via AP)
What happens when powerful AI tools are released without safeguards into platforms used by millions? This question has occupied headlines after Grok, the generative AI chatbot integrated into the social media platform X, was weaponized to non-consensually create and share sexually explicit and degrading images of women and children. The proliferation of such images on X was a direct result of the introduction of an image generation and editing feature to Grok in December 2024. Grok’s subsequent integration with X and the introduction of a “spicy mode” last year exacerbated the abuse by expanding access to, and the dissemination of, such non-consensual intimate imagery (NCII).
The Grok incident has rightly triggered widespread outrage across jurisdictions, and regulators around the world are taking action, with responses ranging from blocking Grok entirely, such as in Indonesia, to launching an investigation into its functioning, as in the United Kingdom.
In India, the Ministry of Electronics and Information Technology (MeitY) issued a letter to X on January 2, 2026, citing a failure to comply with statutory due diligence obligations under the Information Technology Act, 2000 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. In addition to demanding adherence to the legal framework, MeitY required X to submit, within 72 hours of the letter’s issuance, a report on the steps it was taking to address the issue. However, while the ministry’s response has been relatively swift, it has exposed several deeper, systemic issues.
First, this response reveals structural problems in how India is currently attempting to govern AI-driven harms. MeitY has not initiated a dedicated regulatory or investigative process for Grok as an AI system. Instead, it has relied almost entirely on the existing intermediary liability framework under the IT Act and the IT Rules to look into the matter. Earlier, on December 29, 2025, MeitY issued an advisory to social media intermediaries that warned against the hosting, uploading, and transmission of obscene, pornographic, and other unlawful content and advised them to review their internal compliance frameworks. Both the advisory and the January 2 letter to X make clear that MeitY treats this incident as a failure of platform compliance with legal obligations and due diligence requirements. Simply put, the government response is built around takedowns and platform enforcement routed through the intermediary liability regime.
Additionally, the government is currently deliberating on introducing the Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, to combat the rise of deepfakes. These rules would make the labelling of all synthetically generated information mandatory. While this step is well-intentioned, concerns have been raised about compelled speech, ambiguous definitions of what constitutes “synthetic content,” and expanded censorship powers layered onto the existing safe harbor framework. This approach reflects a broader reluctance to regulate AI systems directly and to place binding obligations on stakeholders within the AI ecosystem. The prevailing sentiment against AI-specific regulation in India reflects apprehensions that the imposition of ‘bureaucratic fetters’ will hinder the development and adoption of AI in the country.
The government’s attitude toward wider AI regulation can be seen in the India AI Governance Guidelines released in November 2025. While the Guidelines, released under the India AI Mission, acknowledge the risks posed by AI, they largely defer to the extant legal regime, stating that “a separate law to regulate AI is not needed given the current assessment of risks” and that the risks associated with AI can be addressed through voluntary measures and “existing laws.” However, if the Grok episode highlights one thing, it is that the market has failed to regulate itself and that the existing laws governing platforms have failed to effectively address AI-driven harms such as sexualized deepfakes. This, in turn, has exposed a dire need for regulation.
Grok, as a generative AI model capable of producing illegal and abusive content on demand, is not directly regulated as an AI system, but is only regulated indirectly through X’s obligations as a social media intermediary. If similar harm were to occur on stand-alone AI platforms that are not intermediaries under the IT Act, for instance on other generative AI chatbots, there is a real risk that this would fall into a regulatory grey zone. This leaves India without a clear legal framework to require pre-deployment testing, built-in safety and consent guardrails, or any independent oversight of high-risk generative AI tools.
At the outset, to drive constructive conversations around AI, the Indian government needs to shed the apprehension that a regulatory approach will be perceived as a backward response to emerging technologies. Further, there is an urgent need to move past the reductive, binary narrative that regulation strangles innovation, as this line of thinking leads to the adoption of light-touch regulation and self-regulatory codes administered by industry without oversight, an approach that produces episodes such as the Grok incident. Instead, the way forward ought to embrace regulatory responses that yield tangible accountability from all stakeholders in the AI value chain. To do this, a participatory approach to AI governance is essential. As a first step in this direction, the government ought to consider conducting an open, public, multi-stakeholder consultation that expands the conversation on AI regulation beyond the framework of the IT Act.
Unless it is accepted that systemic changes are needed to address AI-enabled harms such as NCII, a platform moderation approach is likely to change little. What is necessary is prioritizing ‘Safety by Design’: mandating adversarial testing (so-called red teaming), requiring adherence to technical standards such as C2PA for labeling AI-generated content, addressing the presence of NCII and Child Sexual Abuse Material (CSAM) in training data sets, and investing in tools that detect and report such content. Otherwise, any promises to tackle AI-generated sexual abuse will ring hollow.
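To give a sense of what detection tooling of this kind involves, the sketch below is a minimal, illustrative example of perceptual-hash matching, the general technique behind initiatives such as StopNCII that flag known non-consensual imagery without requiring platforms to store or view the images themselves. It is not a description of any system X or MeitY has adopted; the hash values, file names, and threshold are hypothetical placeholders, and production systems use more robust hashing schemes along with legal and procedural safeguards around the hash lists.

```python
# Illustrative sketch only: perceptual-hash matching against a list of hashes
# of previously reported imagery. All values below are hypothetical.
from PIL import Image
import imagehash

# Hashes shared through a trusted reporting channel (hypothetical placeholders).
KNOWN_REPORTED_HASHES = [
    imagehash.hex_to_hash("fa5d7e3c9b10a4f2"),
    imagehash.hex_to_hash("03c8e1b77a2d4f90"),
]

# Maximum Hamming distance treated as a match; small distances catch
# near-duplicates that survived resizing, cropping, or re-compression.
MATCH_THRESHOLD = 8

def matches_reported_image(path: str) -> bool:
    """Return True if the image is a near-duplicate of a reported item."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_REPORTED_HASHES)

if __name__ == "__main__":
    # Hypothetical upload path; in practice this check would run at upload time,
    # before the content is distributed.
    print(matches_reported_image("uploaded_image.jpg"))
```

The design point is that checks of this kind have to be built in before deployment, which is precisely the sort of ex-ante obligation the current intermediary liability framework does not impose.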
Lastly, it is important to address another issue that plagues the content moderation approach to NCII in India: framing the problem merely in terms of “obscene” or “vulgar” content. This approach misses the core harm involved in image-based sexual abuse: a complete lack of consent. The defining feature of NCII is not the subjective assessment of whether a particular image or video crosses the threshold to be deemed “obscene” or “vulgar.” Instead, the primary violation is the creation of such an image in the absence of any meaningful consent from the person depicted. This violation subsists regardless of whether the content is considered obscene. Reducing such abuse to a question of obscenity conflates this distinct harm with the imposition of moral and socio-cultural standards. A framework grounded in consent and rights therefore arguably offers a better path forward for safeguarding the interests of victims. It would allow regulators and platforms to respond to abuse in a victim-focused way while still respecting constitutional free speech protections.
The Grok incident was an easily predictable outcome of a governance model that allows powerful AI systems to be deployed at scale without enforceable, ex-ante safety obligations. When tools with the capacity to shape behavior, reputation, personal autonomy, dignity, and safety are made available to the mainstream, the harm they can cause is not confined to a few users but ripples outward in ways that are difficult to contain. While X admitted to failures and stated that it had taken down the offending content, this was a reactive measure, taken only after large volumes of harmful material had already been generated and widely circulated. Moreover, the full scale of the harm, the number of victims affected, and the adequacy of the fixes have yet to be independently verified.
Reportedly, MeitY was dissatisfied with the platform’s initial response as well. Meanwhile, the broader issue of the dissemination of NCII and CSAM remains unresolved and risks being forgotten. We await MeitY’s further action on this matter.