India’s AI Governance Guidelines Report: A Medley of Approaches
Sriya Sridhar / Jan 16, 2025

Shortly after the Indian Ministry of Electronics and Information Technology released a draft of the long-awaited Digital Personal Data Protection Rules, 2025, it also released a report on ‘AI Governance Guidelines Development’ by a subcommittee constituted under the Advisory Group for an ‘India-specific regulatory framework’ for AI. This comes on the heels of news that the Ministry is also considering setting up an AI Safety Institute in India.
The subcommittee's report has many laudable suggestions and recommendations that merit attention. Interestingly, several recommendations in the report, if in fact operationalized, might deviate from the common refrain of a ‘looser’ or ‘light-touch’ regulatory approach to AI, which the government has been advocating. However, the report also suggests several distinct regulatory approaches – the objectives of which might not be so easily reconcilable – and retains a broad reliance on ‘voluntary’ commitments, which may dilute the force of its recommendations.
Broad approach
The report repeatedly emphasizes a ‘whole-of-government’ approach, in which inter-ministerial coordination on AI governance enables effective monitoring, information sharing, and policy development, rather than a ‘fragmented’ approach split among different sectoral regulators. This is encouraging, given that the deployment of AI touches upon so many different areas of the law – from intellectual property to consumer protection and data protection. To this end, a good start would be to form the committee or group on AI governance that the subcommittee recommends, with representatives from various sectoral regulators (for example, the Reserve Bank of India and the Telecom Regulatory Authority of India).
The suggestion to establish a ‘Technical Secretariat’ to develop a ‘systems-level understanding’ of AI – identifying gaps, developing metrics and protocols for AI accountability, and building an AI incident database – is also a good one. Tracking AI harms, such as through an incident database, is crucial for understanding precisely where the pain points are for citizens and consumers. Overall, the report stresses the need to increase regulatory capacity. It remains to be seen how this will be implemented, especially given long delays in other areas such as data protection, where the proposed Data Protection Board has not yet been constituted and its powers were significantly diluted in the final form of the Digital Personal Data Protection Act, 2023.
The need of the hour is legal recognition of AI harms and the development of remedies against them. To this end, the subcommittee repeatedly emphasizes that ‘minimizing harm’ needs to be the ‘core focus’ of regulatory intervention into AI. This is an interesting stance coming out of India, aligning more closely with efforts like the EU’s AI Act and contrasting with the more ‘pro-innovation’ approach of countries like the UK. However, it is unclear whether the subcommittee advocates for a statutory instrument like the AI Act to enforce this.
The report also mentions that a rigid definition of an AI system is not conducive to ‘future ready’ regulation – which, incidentally, aligns with the UK’s approach – leaving it ambiguous whether India is heading in a technology-specific or technology-neutral direction for AI regulation. The report examines the domains of intellectual property, cybersecurity, and AI bias, and acknowledges the need for frameworks to determine liability and protect against harms in each. Yet it is again unclear what type of regulatory instrument the subcommittee considers best suited to these tasks, which raises important questions. A big miss, however, is any consideration of the use of personal data to train AI models; data protection legislation receives only a cursory mention, in the context of cybersecurity.
Different regulatory strategies: Can they meet?
The subcommittee’s exploration of different methods of regulating AI yields many suggestions, spanning at least three distinct regulatory approaches. The first is a principles-based approach drawing on the OECD’s AI principles and a separate set of principles developed by Indian industry bodies. To operationalize these principles, the report suggests adopting a ‘lifecycle approach’, since risks play out differently at different stages of the AI lifecycle. This also involves taking a holistic view of actors in the AI ecosystem, such as developers, deployers, and end-users.
The second approach is a ‘techno-legal’ approach, where ‘legal and regulatory regimes are supplemented with appropriate technology layers.’ This involves encoding legal provisions within technology, akin to privacy-by-design requirements and the provisions in the EU AI Act on the design and development of AI systems. The subcommittee adds that these approaches could include using technology to ‘identify the allocation of regulatory obligations/liability across the value chain,’ perhaps through the development of audit trails and traceable artifacts, as the sketch below illustrates.
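To make ‘audit trails and traceable artifacts’ concrete, here is a minimal, purely illustrative sketch of the kind of decision log a deployer might keep. Nothing here is drawn from the report; the record fields, actor names, and the loan-screening scenario are hypothetical assumptions about how provenance could be captured to allocate responsibility across the value chain.

```python
# A hypothetical sketch (not from the report) of a 'traceable artifact':
# each AI decision is logged with enough provenance to attribute it to a
# specific model, developer, and deployer in the value chain.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    model_id: str        # which model/version produced the output
    developer: str       # actor that built the model
    deployer: str        # actor that put it into service
    input_hash: str      # hash of the input, so raw personal data need not be stored
    output_summary: str  # human-readable summary of the decision
    timestamp: str       # when the decision was made (UTC)

def record_decision(model_id: str, developer: str, deployer: str,
                    raw_input: str, output_summary: str) -> AuditRecord:
    """Create a single audit-trail entry for one AI decision."""
    return AuditRecord(
        model_id=model_id,
        developer=developer,
        deployer=deployer,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        output_summary=output_summary,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Example: a hypothetical loan-screening deployment
entry = record_decision(
    model_id="credit-screen-v2.1",
    developer="ModelCo",
    deployer="ExampleBank",
    raw_input='{"applicant_id": 1043, "income": 52000}',
    output_summary="application flagged for manual review",
)
print(json.dumps(asdict(entry), indent=2))
```

Even a simple log like this suggests how a techno-legal requirement could be operationalized: each decision becomes attributable to identifiable actors without storing raw personal data, which is the kind of traceability the report gestures at.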
The third set of approaches it suggests is entity-based regulation (a licensing/authorization regime), activity-based regulation (based on the sector in which the AI system is deployed), or a combination of both. The subcommittee finds that, to begin with, activity-based regulation is better suited to furthering the objective of harm mitigation.
The consideration of different strategies is welcome. But these approaches may not be so easily reconcilable, and the way forward is unclear. For example, the principles-based approach is similar to the UK’s approach to AI regulation advocated in its White Paper, which proposes a non-statutory, principles-based framework to provide flexibility and promote innovation. Principles-based regulation has also proved to have many drawbacks in areas such as data protection. Entity- and activity-based regulation is closer to the EU’s AI Act, a statutory instrument that enforces legal obligations for the design and development of AI systems through a product-safety framework and through the prohibition of AI systems deployed for certain use cases that could affect fundamental rights.
A techno-legal approach, insofar as it involves the regulation of AI systems by design, could theoretically be folded into either principles-based or entity/activity-based regulation. But it is unclear whether the term ‘techno-legal’ in the report refers to regulation by design, or merely to using technology to improve regulatory capacity. This is a crucial distinction. It determines whether obligations are placed on AI deployers throughout the development process to address the harms the subcommittee acknowledges can arise across domains, and whether legal provisions are introduced to make AI systems interpretable and explainable, as the principles themselves contemplate.
Where it could all be diluted
The report repeatedly mentions that self-regulation and voluntary commitments are desirable. As I previously argued, this is insufficient to bring about the level of accountability and transparency required of AI deployers and developers, particularly given the disruptive potential of AI that the subcommittee itself acknowledges: premising the regulation of AI purely upon voluntary commitments does not create effective obligations vis-à-vis citizens. It risks AI deployers doing the bare minimum, with much information remaining in a black box. Given the need to protect consumers, minorities, and others subjected to decisions made by AI systems, self-regulation is insufficient to ensure AI safety, removes AI regulation from the scope of democratic participation, and relies on the goodwill of tech companies.
Varied ways forward
AI harms are not a matter of the future; they are already a part of everyday life. These harms are not only individual but societal. It is heartening to see recognition of these harms in India, at a time when legislation like the Digital Personal Data Protection Act has removed key definitions and provisions that would have legally recognized certain forms of technology-based harm. Overall, the subcommittee’s report provides some encouraging direction for the future of Indian AI regulation in terms of building regulatory capacity and harmonizing the different frameworks where the impact of AI will be felt. Of course, whether the potential of these recommendations is realized depends on implementation, and it will be interesting to see how the varied approaches are reconciled. However, the proposed measures could be significantly diluted if they are all premised upon voluntary commitments and reporting.