India’s AI Safety Institute: Key Considerations for a Critical Initiative
Jameela Sahiba / Oct 22, 2024

Jameela Sahiba is the Senior Programme Manager for the Artificial Intelligence Vertical at The Dialogue.
As the world grapples with the rapid evolution of artificial intelligence (AI), conversations around AI safety have picked up pace. Countries are working to ensure that the benefits of AI are harnessed responsibly, mitigating risks and safeguarding societal interests. India, with its growing AI ecosystem, is poised to take a significant step by establishing its own Artificial Intelligence Safety Institute (AISI). This move, expected to be a part of the Safe and Trusted AI Pillar of the IndiaAI Mission, is designed to ensure the responsible development of AI technologies while fostering innovation. As India develops its approach, what are the key elements that should shape the foundation of its AI Safety Institute?
Takeaways from Global Initiatives
India's AISI will not exist in a vacuum. A number of such institutes have already been launched or are in development around the world:
- The United Kingdom’s AI Safety Institute (UK-AISI), originally founded as the Frontier AI Taskforce in April 2023, offers valuable insights. The UK-AISI was designed as a startup within the government, embracing a modern, flexible approach to staffing and operations. This model reflects the growing understanding that the agility of a startup is essential to stay ahead in the fast-moving AI landscape.
- The United States has taken a different route, housing its AI Safety Institute within the National Institute of Standards and Technology (NIST). Through the US AI Safety Institute Consortium, NIST has brought together over 280 organizations to develop empirical guidelines and standards. These collaborative efforts have laid a strong foundation for AI safety, with science-based approaches at the core.
- Singapore, meanwhile, has repurposed its existing Digital Trust Centre to serve as its AI safety body. With an emphasis on transparency and content assurance, Singapore’s model reminds us of the importance of clarity around AI content generation and dissemination, issues that are particularly relevant in the era of generative AI.
India would do well to adopt a dynamic framework, allowing AISI to remain adaptable as the AI landscape evolves. It should also look to build a multistakeholder network, integrating academia, industry, startups, and civil society to shape AI safety standards rooted in scientific rigor.
India’s Global Role and Domestic Imperatives
The establishment of AISI would make India the first country in the Global South to operate an AI safety institute, enabling it to serve as a bridge between developed and developing nations in shaping global AI safety norms. The 2024 Seoul Ministerial Statement, in which 27 countries and the EU, including India, reaffirmed the importance of a collaborative approach to AI safety, is a testament to how vital international cooperation will be in the coming years. As part of this global network, India's AISI should leverage its position to champion AI safety measures tailored to the needs of emerging economies while contributing to the broader international dialogue.
However, beyond global contributions, AISI must focus on India's domestic landscape. AI safety is a socio-technical challenge, and addressing it requires more than just technical solutions. Trust in AI is itself a socio-technical concept, deeply intertwined with cultural, social, and economic factors. AISI should work to integrate these perspectives, emphasizing the importance of interdisciplinary collaboration between technical experts and social partners. By doing so, India can lead in developing AI safety frameworks that are not only technically sound but socially relevant and inclusive.
Defining Scope and Structure
One of the most critical tasks for India is defining the scope of AISI. AI safety encompasses a broad range of issues—from the ethical design of algorithms to data privacy, security, and the mitigation of biases. In this context, India should prioritize enhancing the practical application of AI by implementing an application-level safety framework across sectors. This approach will ensure that AI is used responsibly and effectively, adapting to the unique needs of industries such as healthcare, finance, and logistics, where AI is rapidly gaining traction.
Further, AISI's role should not be limited to safety testing and standard-setting; it should also be advisory, helping policymakers and the private sector understand and mitigate the socio-technical risks AI poses. Because AI's implications differ across cultural contexts, India's AISI is uniquely positioned to identify and address harms within India's diverse social, linguistic, and cultural fabric. This focus on India's unique context can also position the country as a world leader in AI safety, establishing best practices that other nations may follow.
To ensure broad engagement and representation, a hub-and-spoke model for AISI could be particularly effective. While the central hub coordinates and drives the overall agenda, it is crucial that the spokes represent diverse stakeholder groups, including startups, large enterprises, academic institutions, civil society organizations, and government bodies. Working closely with this broad range of stakeholders, AISI can develop and disseminate industry best practices and responsible AI use guidelines, and evangelize the importance of responsible AI practices across sectors.
In addition, to remain at the forefront of AI safety, AISI should be well funded and staffed with cutting-edge researchers who continuously track and address emerging AI-related harms. By fostering this multistakeholder approach, India can build a comprehensive and inclusive AI safety framework, ensuring that voices from across sectors contribute to the development of robust, contextually relevant safety standards.