Perspective

Abundance vs. Scarcity: Who Controls the Internet After AI?

Paul Keller / Jan 22, 2026

Hanna Barakat & Archival Images of AI + AIxDESIGN / Better Images of AI / Weaving Wires 1 / CC-BY 4.0

As with any contested space, there are multiple competing visions for the future of the internet.

Over the course of 2025, it has become increasingly clear that the emergence of generative AI as a new socio-cultural technology is not only disrupting creative practices and business models, but is putting the sustainability of the entire information ecosystem under strain.

The internet is now under unprecedented pressure from the AI companies that have been unleashed by the very openness and scale the internet enabled. There is a real risk that the information ecosystems that have formed around the open web over the past two decades will be devoured by the generative AI systems they have helped bring into being.

The underlying diagnosis is by now relatively widely shared. Across the ecosystem, there is growing agreement that existing forms of public information production and distribution are unsustainable under conditions of increasingly AI-mediated access. From Wikimedia and cultural heritage institutions to fellow think tanks, coalitions of media organizations, and many other parts of the cultural and creative industries, warnings are multiplying that the arrangements sustaining information production are beginning to fail. Long-standing, if often fragile, reciprocal relationships are being eroded in the race to dominate new paradigms of information access. Broadly speaking, two distinct visions are emerging for how societies might respond to this moment.

Abundance and redistribution

In this scenario—outlined in Open Future's Beyond AI and Copyright white paper—societies accept that digital information is fundamentally abundant and stop trying to recreate scarcity where none naturally exists. Instead of forcing information back into market forms at the point of access, this approach focuses on redistribution at the point of AI deployment. Generative AI systems derive their value from the digital commons—public-domain works, openly licensed content, publicly funded research, cultural heritage collections, and everyday online expression—most of which sits outside traditional copyright markets. The response, therefore, should not be tighter control, but collective mechanisms that ensure that the additional value created by a few powerful actors flows back into the institutions and communities that sustain information production.

In practice, this would mean levy- or tax-based redistribution tied to the commercial deployment of AI services, combined with sustained investment in public AI infrastructure—models and systems governed in the public interest rather than by commercial incentives alone. Openness remains the default. Public-domain works stay public. Access is managed for sustainability, not weaponized to extract rents. AI becomes a layer that expands access to knowledge while reinforcing—rather than hollowing out—the institutions that produce, curate, and contextualize it.

Scarcity as a market strategy

The second scenario also starts from the recognition that the traffic-based web economy is breaking down as search gives way to AI-mediated access. But it is being advanced primarily by a different set of actors—large infrastructure providers, content delivery networks, and publisher coalitions—whose response is to re-engineer scarcity at the point of access. This vision has been articulated most explicitly by Cloudflare CEO Matthew Prince in a Stratechery interview.

In this vision, the collapse of the old bargain is treated not as a reason to rethink how value is redistributed across the information ecosystem, but as an opportunity to turn information into a tradable input once again. Moving content behind paywalls, restricting crawling, and enforcing licensing conditions through private technical infrastructure are presented as the foundation for a new market in which AI companies pay for access and publishers—at least those with sufficient scale and bargaining power—are compensated. Pay-per-crawl schemes, proprietary signaling standards, and network-level enforcement mechanisms are not incidental features of this approach; they are its core instruments.
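To make the mechanism concrete, here is a minimal sketch of how network-level pay-per-crawl gating might work. It assumes an HTTP 402 ("Payment Required") convention and uses illustrative crawler names; both are assumptions for illustration, not a description of any vendor's actual implementation.

```python
# Hypothetical sketch of pay-per-crawl gating at the network edge.
# Crawler names and the HTTP 402 convention are illustrative assumptions.

AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot"}  # example AI crawler identifiers

def gate_request(user_agent: str, has_paid_token: bool) -> int:
    """Return an HTTP status code for an incoming crawl request."""
    if any(bot in user_agent for bot in AI_CRAWLERS):
        # Identified AI crawler: serve content only if payment is attached.
        return 200 if has_paid_token else 402  # 402 Payment Required
    # Ordinary traffic (browsers, search crawlers not on the list) passes through.
    return 200

print(gate_request("Mozilla/5.0 (compatible; GPTBot/1.0)", False))  # 402
print(gate_request("Mozilla/5.0 (ordinary browser)", False))        # 200
```

The point of the sketch is that the gate operates on infrastructure-level identification and payment state, not on copyright status: the same public-domain page is served or withheld depending on who asks and whether they pay.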

The cost of this approach is structural. It reframes openness as a market failure to be corrected by exclusion, rather than a public achievement to be preserved through stewardship. It favors scale and bargaining power, rewards actors who can enforce exclusion, and hardens governance through private infrastructure. While scarcity may produce deals in the short term, it risks normalizing the enclosure of the digital commons and accelerating concentration in the long run—leaving smaller publishers, cultural institutions, researchers, and the public interest structurally disadvantaged in an internet rebuilt around controlled access rather than shared abundance.

To be clear, neither of these scenarios is currently working in practice. Despite some high-profile licensing announcements—particularly in parts of the music and audiovisual sectors—the situation remains bleak, especially when it comes to information production and access to knowledge and cultural heritage.

From the perspective of anyone committed to open access to knowledge and culture, and to a healthy, diverse, and pluralistic information ecosystem, the first scenario is clearly preferable. As we have argued in Beyond AI and Copyright, it is also a precondition for democratic societies that do not leave the development and deployment of AI entirely to commercial actors. Such societies need public AI to flourish—and public AI, in turn, depends on a sustainable information ecosystem that is not driven by commercial imperatives alone.

Saving the Commons before it closes

Yet, as I write this, that scenario looks like a distant possibility at best. Setting up a public AI ecosystem and introducing levy- or tax-based redistribution at scale runs against the grain of the past three decades of market-centric policymaking. These obstacles are real, and the timelines are long.

At the same time, the need to respond to the changing environment is urgent. Where political solutions appear distant, technical fixes promise immediacy. Interventions like gating access to previously public information, restricting crawler access, and deploying pay-per-crawl schemes embedded in private enforcement stacks offer the prospect of quick relief—or at least of fending off resource-draining crawlers. This is a reality that even organizations with historically strong commitments to openness are confronting. Few observers would have predicted at the beginning of last year that by the end of 2025, Creative Commons, long seen as an ideological anchor of the open movement, would come to engage—albeit cautiously—with a pay-per-crawl standard spearheaded by publishers.

Closer to home, this has also manifested itself in our own work, which led us to propose a differentiated access model for sharing cultural heritage data. That model suggests charging AI companies for bulk access to collections even when the individual works are in the public domain—a pragmatic response to immediate pressures, but also a sign of how profoundly the environment has changed.

It is important not to conflate such pragmatic experiments undertaken by commons-based organizations that remain fundamentally committed to open access with the scarcity-driven vision advanced by Cloudflare and segments of the publishing industry. Initiatives such as differentiated access models developed by cultural heritage institutions, sustainability-oriented services like Wikimedia Enterprise, and experiments with gated access to training datasets should not be read as endorsements of an enclosure-based future for the internet. Rather, they are rational, often reluctant responses to immediate pressures: escalating operational costs, unreciprocated large-scale extraction by AI developers, and the absence—so far—of credible, system-level redistribution mechanisms.

These bottom-up approaches are best understood as efforts to shore up the commons under adverse conditions, not to weaponize access or to recreate artificial scarcity as a governing principle. They operate in contexts shaped by public-interest governance and are generally constrained by strong normative commitments to openness. In a world where deployment-level redistribution and public AI infrastructure were meaningfully in place, many of these constraints would likely become unnecessary—or would evolve into very different forms.

Seen from a distance, however, it should be clear that individual information producers and stewards cannot, on their own, successfully counter a small number of unprecedentedly powerful technology companies. For any response to succeed in preserving a diverse and sustainable information ecosystem, collective action is required—both bottom-up, through coordinated action by information producers, and top-down, through political will to enable redistribution via fiscal interventions. While the latter may seem implausible today, it is worth noting that some of the most prominent architects of the current technological transformation already regard it as inevitable.

We are in the midst of a profound realignment of the information ecosystem. If 2026 resembles 2025 in any meaningful way, this process will only accelerate. That should be taken as an encouragement to shape its direction while we still can. Let 2026 be the year in which we embrace collective responsibility for a sustainable information ecosystem—by supporting the abundance and diversity of the digital commons and building public-interest alternatives to scarcity-based control. The alternative is a new age of artificial information scarcity, one that will predictably lead to further concentration of information production and deepen inequalities in access to knowledge, with corrosive effects on democratic societies.

Authors

Paul Keller
Paul Keller is Director of Policy at Open Future with 20+ years of experience as a media activist, open policy advocate, and systems architect. A political scientist by training, he deeply understands the political, social, and legal implications of digital transformation. Paul is a research fellow a...
