Perspective

India’s New AI Governance Plan is Much Ado About Nothing

Amber Sinha / Nov 21, 2025

Amber Sinha is a contributing editor at Tech Policy Press.

Indian Prime Minister Narendra Modi addresses armed forces personnel during a Diwali celebration onboard the INS Vikrant on October 20. (Prime Minister's Office)


India’s Ministry of Electronics and Information Technology earlier this month released governance guidelines for artificial intelligence, the first major regulatory document on AI to emerge in the country, if one can call it that.

Following an earlier draft in January, this 66-page final report confirms that the Indian government plans to create a regulatory framework for AI that focuses largely on industry self-regulation.

In fact, while introducing these guidelines, Ministry of Electronics and Information Technology Secretary S. Krishnan said that there had been a “conscious and deliberate approach of not leading with regulation,” underscoring the overall hands-off approach the Indian government has taken when it comes to regulating AI.

The timing of the release was significant, as the guidelines would enable the Indian state to present a governance framework at the India AI Impact Summit scheduled in New Delhi in late February, where many of the world’s top decision-makers on the issue will gather. It would be more useful to see these guidelines unveiled in conjunction with a broader set of policy and industry interventions in the run-up to the summit, including the India AI Mission, the Indian government’s primary AI development project.

A focus on DPI

Much of the document is focused on Modi’s whole-of-government approach. The guidelines seek to ensure clear channels for communication and coordination between different arms of the government, to build capacity within the bureaucracy, and to direct regulators to make sure AI is used responsibly in the public sector.

While the AI guidelines are light on details, the document signals the Indian government’s overall vision on AI policymaking.

This vision views all emerging technologies through the lens of digital public infrastructure (DPI). Infrastructure lends itself naturally to AI policymaking: the challenges of developing GPU capacity, increasing access to datasets and expanding AI infrastructure feature prominently.

However, unlike the discussion on AI infrastructure in most other countries, the Indian policy document also views AI as an extension of its existing digital infrastructure. Amid concerns over India’s ability to develop globally competitive AI solutions, this may help India leverage its thought-leadership on DPIs and position itself as relevant in the AI race.

The guidelines clearly recommend the integration of AI with Aadhaar, India’s national digital identity system; UPI, the state-endorsed mobile payment infrastructure; and DigiLocker, a personal online repository the government makes available to Indian residents, used primarily for authentication purposes. India’s existing digital language repository, Bhashini, also finds a key mention. Similarly, India’s regulatory technology privacy initiatives, such as DEPA (Data Empowerment and Protection Architecture), a techno-legal system for permission-based data sharing, are touted as a potential solution to the privacy-innovation tradeoff questions that have dominated policymaking around AI globally.

Accountability and liability

The document is conspicuously light on accountability and liability questions.

While acknowledging the risks posed by the use of AI, it largely passes the buck on taking greater action, concluding that “many of the risks associated with AI can be addressed under existing laws.” In such cases, it diagnoses the primary problem as a lack of predictable and timely enforcement. However, even on this issue, aside from some homilies about regulators needing to be vigilant, it offers no substantive contribution.

On the question of liability, the guidelines appear to recommend a graded system of penalties, while in the same breath warning against accountability mechanisms stifling innovation. They then move swiftly towards self-regulation, prescribing the establishment of an AI Governance Group (AIGG).

Any specificity in the guidelines is reserved for the issue of regulating deepfakes. The document suggests that the AIGG could review tools for watermarking and labeling AI-created material to pinpoint its source, potentially tracing it back to the specific databases or large language models (LLMs) used to generate it. This is a clear reference to the latest amendment to the IT Rules (2021), which mandates labeling by platforms.

Together, these facets clearly align with India’s hands-off approach to AI regulation, which gives even some of the more controversial use cases, such as facial recognition in public services, a wide berth. Only the politically sensitive issue of deepfakes has so far been singled out for regulation.

While the guidelines envision self-regulation, it remains to be seen what form ensuing industry codes of conduct or model regulations may take.

Aside from allowing the Indian government to tick the box of presenting a governance framework at February’s AI summit, the guidelines do not achieve much.

Authors

Amber Sinha
Amber Sinha is a Contributing Editor at Tech Policy Press. He works at the intersection of law, technology, and society and studies the impact of digital technologies on socio-political processes and structures. His research aims to further the discourse on regulatory practices around the internet, ...
