Perspective

What the India AI Summit Leader's Declaration Means for the Future of the Digital Commons

Ramya Chandrasekhar, Renata Ávila / Mar 6, 2026

India's Prime Minister Narendra Modi, center, holds hands and poses for photographs with various heads of state of participating countries during the AI Summit in New Delhi, India, Thursday, Feb. 19, 2026. (Indian Prime Minister's Office via AP)

In February, India hosted the first global AI summit organized by a country in the majority world. The focus this year was “impact,” but as commentators have noted, the trend of Global South countries offering their resources, terrain and access to populations to help scale AI continues. This bargain is particularly stark in the Pax Silica Declaration, an alliance organized by the United States to secure critical minerals and cooperate on AI infrastructure, which India and other nations recently signed. Against this context, the Leader’s Declaration issued at the end of the AI Impact Summit needs to be unpacked carefully.

With 91 signatories (including China, the European Union, Russia, the US and the UK), the Leader’s Declaration pushes the idea of “wide-scale adoption of AI and AI-based applications” for economic and social good. But the Declaration goes on to identify certain other aspects — notably the need to democratize AI resources (including AI research infrastructures) to enable all countries to develop and deploy AI systems, adopt open-source and other publicly-accessible AI technologies where appropriate, advance trusted and secure AI technologies, and improve AI literacy.

As long-time supporters of and contributors to the digital commons, we have long advocated for increased public recognition and support for such goals. But the non-binding Leader’s Declaration does not include concrete commitments to open source AI ecosystems or the digital commons. By contrast, the most recent trade agreements between the US and India, the EU and Mercosur, and the US and Indonesia require signatories to adjust their policy and legal frameworks for the frictionless operation of commercial digital services and the free flow of data, impose even stricter provisions for intellectual property and trade secret protection, and contain no real commitments on technology and capacity transfer, securing the dominance of the AI hyperscalers. Despite the platitudes in New Delhi, we are far from achieving the vision of shared global infrastructure that the Leader’s Declaration proposes.

At best, the Leader’s Declaration results in merely symbolic commitments to the digital commons while changing nothing in the AI innovation paradigm. At worst, it supports ‘openwashing’ and ‘commonswashing’ in the name of democratizing AI, justifying a frame for further extraction: small nations invest resources and energy in growing their AI commons, in good faith, only to become a global training and experimentation resource at the service of the AI giants, while open source software and openly licensed content end up in closed infrastructures.

The importance of open source

At the outset, the growing focus on open source AI ecosystems across the AI Impact Summit, and its recognition in the Leader’s Declaration, is welcome. It helps governments understand the collective, participatory nature of the foundations on which most current AI developments were built. Open resources are crucial to AI innovation. Openly licensed content, the digital public domain and peer-produced online repositories serve as crucial pre-training and training resources, while the open web as a whole is central to crawling-based training techniques as well as retrieval-augmented systems for AI chatbots. Open source software libraries such as PyTorch, TensorFlow and scikit-learn are also crucial to generative AI innovations; they are developed and maintained by collaborators from all over the world, often for extended periods and as a pro bono activity that then leads to both commercial and social impact. In fact, some researchers estimate that the demand-side value of open-source software is $8.8 trillion, and that “firms would need to spend 3.5 times more on software than they currently do if OSS did not exist.” Seen this way, all AI is public.

Open source approaches can create vibrant innovation ecosystems. Permissive licensing guarantees the replicability of open source software by ensuring source code is always publicly accessible. Similarly, with openly licensed datasets, modifications are authorized in advance, reducing copyright-related transaction costs for reuse. With model weights also released under permissive licenses, AI models can be configured by users and run on their local devices, where users only incur the cost of computing. The Open Source Initiative’s definition of open source AI attempts to take this one step further, requiring developers to release not only model weights but also the training scripts used to create training datasets, as well as detailed information on the composition of training datasets; there is still room for that definition to go further and require sharing the datasets themselves. Indeed, researchers estimate that opting for open source models instead of closed models could have resulted in consumer savings of an estimated $24.8 billion per year.

However, the AI stack has several dependencies, ranging from the computing infrastructure to the model, the training data, and finally, the application or tool, such as chatbots. Open source software, such as the scripts for training data annotation and anonymization and other Python scripts essential to machine learning pipelines, is the product of peer production. But investments by Big Tech firms are concentrated at the edges of the stack: in acquiring compute and data centers, and in creating AI-driven applications. The Frontier AI Commitments announced at the opening of the India AI Summit further drive home this point: frontier AI companies issued voluntary commitments on equitable and responsible AI, but the goal is to ensure hyperscaled AI deployment across sectors with little to no regard for the human or environmental costs. On the tail of the India AI Summit, India’s budget for 2026-2027 also approved several tax breaks and benefits for foreign hyperscalers to invest in data centers in India, often with local Indian companies as partners.

This means that open source software and open data (and the volunteer labor underpinning them), as well as the training of systems provided by Global Majority populations who will have to engage with these AI systems for “their good,” become resources that Big Tech (or now frontier AI companies) can use at zero cost. In the absence of considerable public investment in open source, this creates serious extraction and sustainability issues.

We need to learn from and expand small, palliative initiatives such as Germany’s Sovereign Tech Agency, which channels public investment into the maintenance of open source software, and apply this model in other jurisdictions. Countries and regions should invest far more together, not only to support maintainers but also to develop viable, commons-friendly, sustainable digital industrial policies that enable them to pool resources, exchange practices, upskill their citizens and truly depart from the fragile AI dependency they are locked into.

Replicability and open access alone do not guarantee a commons

There is also a more fundamental issue regarding the conflation of openness with the digital commons. Open source software, openly licensed datasets, open weights and even the OSI’s open source AI definition are all oriented towards replicability of the underlying artefacts, and the freedom to modify and run versions of the artefact locally. But to create a commons of technology, replicability needs to be combined with sustainable governance, where value flows back to communities and the communities have a say about technology roadmaps, sustainability strategies and red lines. The Digital Public Goods Alliance, for instance, maintains a registry of open source digital solutions, but also notes that the use of these solutions needs to be undertaken with regard to environmental sustainability and fundamental human rights, be supported by an active community and be maintained regularly.

In global advocacy and policy, however, the political connotation of the digital commons as heterogeneous institutional arrangements for decentralized and communal production has often been watered down to a repository of open-access resources created by volunteer labor, which can then be exploited by the market, more often than not without any contribution back. This has been flagged by several commons researchers and practitioners in relation to open source software and open access publishing, as well as the model of platform-mediated, API-based access to open creative and cultural content on the Internet.

But even where digital commons flourish, there are numerous instances of extractive use, a growing phenomenon given the current degree of AI hype. Peer production platforms such as Wikipedia have reported an overwhelming amount of traffic from web crawler bots. This has prompted them to adopt new sustainability strategies, such as charging AI companies for API access to Wiki datasets, while the underlying Wiki content remains permissively licensed from a copyright perspective.

Further, open source and digital commons initiatives have also been critiqued for marginalizing social, environmental and epistemic justice aspects. For instance, researchers have investigated the lack of proper filtering and screening techniques for free publicly-accessible web crawl datasets, such as Common Crawl (which is a popular source for training data), resulting in toxic data circulating within AI systems.

Instead of frontier AI, we need more concrete public commitments for the digital commons and an AI commons stack

That the digital commons have enabled a new form of AI innovation is testament to their centrality. Digital commons are the ultimate, tested model that enables a different digital architecture. But value needs to flow back: to the peer producers, the digital commons and the online information ecosystem as a whole. And this value cannot only take the form of investments in national economies for AI infrastructure, through data centers, mining of critical minerals, or a relaxed regulatory environment for crowdsourced data annotation.

Value needs to go back to the communities that are the heart of creating and curating digital commons, and to the institutions that have been custodians of knowledge and culture, which are now struggling to continue and transform their missions because of constant underfunding. Libraries, research centers, and public interest archives are some crucial examples of such institutions that should benefit from value reparations, not only the commercial sector. Some commentators have proposed an AI tax or levy, where proceeds can be used to strengthen cultural and knowledge institutions, and fund AI and digital literacy initiatives.

But even at the national level, public expenditure currently directed toward the dominant AI actors, which provide services to many of these governments at scale, can be redirected, even if gradually, to fund and support local efforts for cultural and knowledge preservation. This is crucial to realize the promise of democratizing AI resources, as set out in the Leader’s Declaration. In the absence of strong commitments to this effect, the Leader’s Declaration ultimately supports an extractive, hyperscaled AI ecosystem, portraying it as in the public interest and eliminating any resistance or criticism to large-scale deployments of AI.

In fact, nations should take a more ambitious approach, treating their AI commons as strategic and critical infrastructure. Because modern AI is trained on massive datasets, it primarily reflects the languages and cultures that have well-funded, digitized, and open infrastructures. If governments fail to invest in their own digital knowledge commons, their local histories, languages and technical expertise are absent from the training data used by AI. As a result, entire regions, particularly the Global South, become invisible to and misrepresented by AI tools, which further marginalizes their voices in the global digital economy even as they must invest resources in models distant from their cultures to build local solutions. At the level of public regulation and policy, there is a need for concrete support to nourish the collective capacity to engage in cultural production.

For public-sector and social-sector efforts, a new AI literacy approach is necessary, but the Leader’s Declaration does not go beyond upskilling or reskilling for bureaucratic efficiency or economic gains. Most AI literacy is limited to explaining the current status quo and presenting it as the only possible path: one that is generally taxing on the environment, costly, in need of constant updates and large-scale investment, and focused on industry-centric gains. A different AI literacy, adapted to local needs, open to an array of different approaches, guided by societal goals and critical of the tradeoffs of deploying these new technologies, could substantially change the game and deliver the so-called ‘AI for Good’ that the Declaration aims at. The goal should be to provide nations with the tools and support to test different approaches and to develop the institutional confidence and agency needed to benefit from the latest technology, all while preserving influence over the tools being deployed and a say in how they are governed.

Finally, the Leader’s Declaration also takes note of a voluntary initiative known as the Global AI Impact Commons — “a practical platform to encourage and enable the adoption, replication, and scale-up of successful AI use cases across regions,” including open source tools and AI models. But beyond simply a repository of openly licensed software, datasets, weights and models, any effort for an AI commons needs to take the issue of sustainable governance frameworks seriously. This is a precondition for any collective effort involving up to 91 nations to truly deliver any trusted and equitable, lasting and sustainable outcome.

Global knowledge sharing on the best governance models for open-source AI innovations is necessary. And governance models need to involve all of the publics affected by AI, not just governments and other elites that want to profit from AI scaling and do business with the few companies that currently control the AI ecosystem. New licensing frameworks for community-created datasets are encoding reciprocity obligations, requiring reusers to contribute value back to the commons, either through financial contributions for maintenance or by sharing infrastructure. These can be combined with institutional structures such as data cooperatives, data trusts and data unions, which embed participative decision-making and values of equity, care and social justice, and which can monitor compliance with these licenses and negotiate and facilitate interoperability.

Authors

Ramya Chandrasekhar
Ramya Chandrasekhar is a doctoral researcher at the Center for Internet and Society of the French National Center for Scientific Research, an associate researcher at the Open Knowledge Foundation, and a member of the Digital Commons Policy Council. In her research, Ramya studies legal strategies for...
Renata Ávila
Renata Ávila is an international human rights and technology lawyer, as well as a leading advocate for openness, access to knowledge, and digital justice. She currently heads the Open Knowledge Foundation (OKFN), a global non-profit dedicated to unlocking the value of open data and knowledge for soc...
