What It Will Take for India’s New AI Governance Guidelines to Work
Shefali Malhotra / Dec 3, 2025

The race to govern artificial intelligence (AI) is intensifying across the world. From Brussels and Beijing to Washington, Kigali and Brazil, governments are vying to shape the rules for a technology that is steadily permeating economies, societies and geopolitics. India’s newly released AI Governance Guidelines stake a claim in this global contest, aiming to lead in shaping norms rooted in safety, inclusion and the public good.
The guidelines propose a governance framework to build infrastructure, strengthen human capacity, plug regulatory gaps and enforce accountability across India’s AI ecosystem. They merit recognition for embedding trust, inclusion, fairness, accountability, safety, resilience, sustainability and a people-centered approach at the heart of their governance model. Yet the principle of ‘innovation over restraint’ risks hollowing out the very safeguards needed to make those values real. One clear manifestation is the reliance on voluntary measures by companies, such as adopting Responsible AI principles within organizations, making collective industry pledges, adhering to technical standards and self-certifying compliance, as the means to address and mitigate the risks of AI. In short, the guidelines bet on self-regulation.
But why regulate at all? Because AI is both a driver of innovation and a vector of risk. If left unchecked, AI systems can amplify bias, entrench discrimination, compromise safety, erode rights and exploit systemic vulnerabilities. These risks are playing out in India right now. Just weeks ago in Faridabad, a city in the northern state of Haryana, a 19-year-old student died by suicide after allegedly being blackmailed with deepfake images. In New Delhi, a 14-year-old child was misdiagnosed by ChatGPT when the chatbot reportedly mistook symptoms of an anxiety attack for a gastric infection. During India’s 2024 elections, AI tools were widely deployed not only for voter outreach but also for disinformation and deception. Regulation is about guiding AI innovation while building guardrails against these harms.
So, is the trust placed in industry to regulate itself well-founded, or just wishful thinking? The track record points to the need for caution, not confidence. Indian regulatory history suggests self-regulation often breeds conflicts of interest and privileges industry priorities over the public good. From the corruption that plagued the now-defunct Medical Council of India to the Satyam and Nirav Modi scams, each episode highlights the inherent limits of industry-led oversight. In the AI space, voluntary commitments have similarly faltered. A 2024 MIT Technology Review assessment of the White House’s AI pledges, for example, found progress on technical measures, such as increased adoption of red-teaming and watermarking, but glaring failures in enhancing consumer protection, preventing non-consensual use of personal data and reducing AI’s environmental footprint.
To their credit, the guidelines leave the door open for mandatory obligations in the future. But whether that shift arrives in time, or with enough teeth, remains uncertain. In their current form, voluntary commitments stand a chance only if backed by enabling conditions. The first is a competitive market that gives companies structural incentives to act responsibly: building consumer trust, avoiding reputational harm, easing market access and offering consumers alternatives. However, a recent market study by the Competition Commission of India shows multiple levels of the Indian AI ecosystem dominated by a handful of large players, particularly Big Tech firms, who often engage in anti-competitive practices such as exclusive partnerships and self-preferencing. This dynamic is likely to pose challenges for accountability, especially in the absence of enforceable safeguards.
The second condition is a shared commitment to purpose-driven innovation, one that prioritizes societal goals over the imperatives of speed and scale. This is particularly critical in a resource-constrained country like India, where government investment in AI innovation is steadily growing. Yet there is little transparency about how these funds are allocated or what outcomes they achieve. A 2019 report on “Leveraging AI for identifying National Missions in Key Sectors,” published by the Ministry of Electronics and Information Technology, offered a roadmap for identifying the problems most in need of, and most amenable to, AI solutions. The need of the hour is to re-center policy on these goals so that innovation serves developmental priorities and rights-based safeguards, lest it become a wasted opportunity. To this end, the people-first approach, embedded across design, development, deployment and post-deployment, can help anchor innovation in the lived realities of Indians.
The third condition is an infrastructure and methodology for holistic risk assessment, one that enables governments, developers, deployers, users and the public to see clearly which risks are being addressed and which are not. The guidelines rightly flag intrinsic risks, such as bias and discrimination, and systemic risks arising from market concentration, geopolitical instability and regulatory changes. But the scope must widen to include labour displacement, environmental degradation, loss of democratic oversight, and cultural and psychological risks. Without accounting for these dimensions, risk governance will remain blind to real-world harms. A practical first step is to institutionalize the proposed incident reporting framework, a national database of AI harms, which would inform risk assessment and classification for targeted regulatory interventions. To be effective, however, reporting should be mandatory, transparent and publicly accessible.
The fourth condition is robust transparency and disclosure requirements backed by independent audits. In the absence of legally enforceable risk mitigation measures, the guidelines rely on softer mechanisms, such as transparency reports, self-certifications, peer monitoring and third-party audits, to support adherence. These tools must guard against two risks: good actors shouldering the compliance burden while bad actors slip through, and dominant firms bending standards to their advantage. For them to work, clear audit rules and independent oversight of the audit process are essential. Without these, transparency risks becoming symbolic rather than substantive.
The final condition is access to redress in cases of AI-related harm. India’s AI guidelines leave grievance redress mechanisms entirely to the discretion of AI deployers, offering no clarity on the composition of grievance committees, the processes they follow or the remedies they can grant. Without such detail, aggrieved users are unlikely to secure meaningful redress, leaving accountability weak and harms unaddressed.
The AI Governance Guidelines mark another step in India’s long line of reports aimed at shaping an AI governance framework. However, the true test of India’s model will be whether it can maximize the benefits of this technology while delivering tangible protections for its citizens and building public trust. Only then can India credibly lead by example in the global AI race.