Jevons Paradox Makes Regulating AI Sustainability Imperative

Robert Diab / Mar 11, 2025

Hanna Barakat & Archival Images of AI + AIxDESIGN / Better Images of AI / Frontier Models 2 / CC-BY 4.0

In January, the Chinese firm DeepSeek unveiled an AI model that rivals the top US frontier models – but was developed at a fraction of the cost. US tech stocks hit a speed bump. Microsoft CEO Satya Nadella tried to put a positive spin on it by tweeting: “Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of.”

William Stanley Jevons, an English economist who wrote in the 1860s about Britain's coal supply, predicted that as techniques improved to produce more power from a given amount of coal, we would use more coal, not less. More efficient use of coal would make it profitable in more cases, causing demand to "skyrocket."

While Jevons paradox might be good news for AI providers like Microsoft, it's not good for the environment. It upends a common assumption that as AI technology improves – more efficient GPUs and data centers, smaller and better language models like DeepSeek – AI will have a smaller footprint. No, says Jevons: more efficient, cheaper AI will cause demand to explode and energy consumption to soar.
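The arithmetic behind the paradox can be sketched with a simple constant-elasticity demand model – an illustrative assumption of this article's editor, not a model drawn from Jevons or from the paper discussed below. When demand for a service is elastic enough (elasticity above 1), making the service more efficient lowers its effective cost so much that total resource use rises rather than falls:

```python
# Illustrative sketch of Jevons paradox under a constant-elasticity
# demand model (a hypothetical toy model, not from the article).

def energy_use(efficiency: float, elasticity: float,
               k: float = 1.0, price: float = 1.0) -> float:
    """Total energy consumed when demand follows D = k * cost^(-elasticity)
    and cost per unit of service is price / efficiency."""
    cost = price / efficiency            # efficiency makes each unit of service cheaper
    demand = k * cost ** (-elasticity)   # cheaper service -> more demand
    return demand / efficiency           # energy = service delivered / efficiency

baseline = energy_use(efficiency=1.0, elasticity=1.5)
improved = energy_use(efficiency=2.0, elasticity=1.5)  # hardware gets 2x as efficient

# With elasticity > 1, doubling efficiency *increases* total energy use:
print(improved / baseline)  # 2 ** 0.5, roughly 1.41 -> ~41% more energy overall
```

In this toy model, efficiency gains reduce total consumption only when demand is inelastic (elasticity below 1); above that threshold, the "rebound" overshoots the savings entirely – the backfire case Jevons described for coal.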

Yet this isn’t the only bad news of late on the sustainability front. Global efforts to regulate AI’s impact on the environment have faced stiff opposition from important players in the field in recent weeks. At the Paris AI Action Summit in February, the United States and the United Kingdom refused to sign even a non-binding statement of principles on “inclusive and sustainable AI,” with US Vice President JD Vance disparaging the idea of regulation altogether.

One might look elsewhere for signs of hope on AI sustainability. The United Nations High-level Advisory Body on AI has proposed creating an “international scientific panel on AI” that could oversee the development of standards for measuring and reporting energy use. The OECD has proposed something similar. At the Paris Summit, a group of countries and 37 AI companies formed a “Coalition for Sustainable AI,” which also seeks to develop standards for measuring and reporting on energy. It also commits to sharing research on more sustainable, efficient AI products and services. An even broader coalition of some 130 civil society groups formed a similar pact a few weeks earlier, setting out more ambitious goals.

But the global AI community seems stuck at the impasse of only making voluntary commitments – if that.

A new paper by Alexandra Sasha Luccioni, Emma Strubell, and Kate Crawford shows why a voluntary approach is unlikely to be effective here. Jevons paradox and the other "rebound effects" they trace suggest that much of AI's environmental impact remains opaque or escapes a company's efforts at self-reporting.

The paper supports the case for laws that compel companies to be more transparent about the wider impact of their supply chains, energy consumption, and e-waste – laws that would encompass the total life-cycle of AI systems and introduce meaningful incentives to be sustainable rather than trapped in a zero-sum race for growth.

AI’s environmental impact is real and growing

Understanding the nature and extent of the problem begins with noting current consumption and where trends are pointing. The authors' overview of data on AI's wide-ranging environmental impact offers a helpful snapshot of where things stand. It also reveals how AI's impact goes far beyond a single company's water or electricity draw.

To begin there: as the authors note, data centers, which handle much of AI training and computation, account for 2% of global electricity consumption, according to a 2024 International Energy Agency report, and that figure is set to double by 2026 – surpassing the power usage of Canada, a nation of 40 million people. As the IEA points out, much of this energy is generated from non-renewables. Data centers also require enormous amounts of fresh water for cooling, much of which evaporates or must be filtered before reuse. Google and Microsoft have drawn 20% and 34% more water, respectively, in recent years, with no sign of slowing down – and this at a time when half of the world's population is moving toward water scarcity.

Companies are also reporting an increase in greenhouse gas emissions. Google reports a 48% rise in emissions since 2019, Baidu reports a 32.6% increase since 2021, and Microsoft reports a 21% increase since 2020. Some companies are seeking to pivot to nuclear energy, with Microsoft recently signing a deal to revive reactors at Three Mile Island. But nuclear power raises a host of additional concerns, including water consumption for cooling and nuclear waste.

Other issues arise around mineral extraction for AI hardware and the consumer devices that interface with AI. Tungsten, lithium, germanium, and other rare minerals are mined in ways that inflict enormous environmental damage or contribute to conflict and war. Yet, as Luccioni and her co-authors note, many details about the environmental concerns behind the supply of materials to companies like TSMC and NVIDIA remain opaque, such as what is being done to reduce or address the radiation, toxic waste, or drought left in the wake of the mining efforts these companies rely upon. More broadly, the constant turnover of gear – new servers, GPUs, phones – contributes to massive amounts of e-waste, which is now, as the authors note, "the fastest growing segment of solid waste worldwide, reaching 62 million tonnes in 2022."

Company reports won't tally the larger impact of AI's ever-growing water and electricity draw on a nation's grid or water supply. Details around mineral extraction and e-waste also remain mostly opaque – including how toxins like mercury, arsenic, and lead are left to poison local environments. Voluntary reporting by AI companies is apt to leave much of this out of the story.

Rebound effects call for a holistic approach

For Luccioni, Strubell, and Crawford, the true picture of AI's impact includes not only these wider but opaque details around supply chains and disposal but also what they call the "rebound effects" of the wider embrace of AI.

They break these effects into three themes. AI has "material rebound effects" by changing how products are made and distributed. Streaming film and television, for example, renders discs and VHS tapes obsolete but brings about a whole new infrastructure for producing and delivering content, one we seldom compare in terms of environmental impact. Similarly, while AI often involves "scaling" techniques, such as parallel processing, that make individual devices more efficient, it may place a higher demand on energy grids and power supplies that often goes unnoticed. AI can also drive spatial shifts, as digitization leads to fewer stores and offices but larger warehouses, more transportation to deliver products, and more energy-hungry data centers.

“Economic rebound effects” are both direct and indirect. Data centers that process AI are becoming more efficient, but as noted earlier, they’re handling more traffic and drawing more energy. Indirectly, as new consumer devices offer more AI functionality, they spur more frequent upgrading, generating more mineral extraction and e-waste. We’re also seeing what the authors call “societal and behavioral rebound effects,” as in the case of AI’s use in targeted advertising. A striking case is Amazon’s reliance on product recommendation algorithms to generate a third of its annual sales – pointing to how AI has helped spur a rise in consumption in recent years that involves a range of impacts, from manufacturing to transportation to product disposal through obsolescence.

As the authors note, “these distributed impacts [of AI] remain notoriously difficult to track,” and companies “commonly disclose only a narrow range of environmental metrics.” We could encourage more thorough or comprehensive reporting methods, but AI producers are still driven by “market logics” – still incentivized to use ever greater amounts of energy and computation to keep creating bigger, more powerful models. (See, for example, Sam Altman on the demands of GPT-4.5).

The need for law that takes a holistic approach

If more thorough measurement and reporting alone won't solve the problem, what will? Luccioni, Strubell, and Crawford argue that "what is required is a more substantial reimagining of the relationship between AI technologies, business objectives, and ecological imperatives." This might involve "public policy frameworks that penalize unsustainable practices and reward genuinely carbon-negative deployments of AI, and business models that do not hinge on perpetual growth, in order to ensure that increased AI efficiency does not simply spur more consumption." We're not told much more.

But at a minimum, this would involve “a far more transparent stance on all the environmental impacts of AI systems and tak[ing] accountability for the far-reaching impacts of the technologies that it develops and deploys.”

It may take a while to reimagine the relation of AI to business and the environment. In the short term, however, getting companies to be more transparent and accountable about wider-reaching environmental impacts is something they can't be left to do voluntarily.

We won't make real progress on environmental sustainability in AI without binding obligations on companies to be more transparent about impact – and to extend that transparency further back in the supply chain and further out into the afterlife of their products. As Kate Crawford has noted elsewhere, voluntary efforts are helpful, but the problem demands a more robust set of tools. This might involve a combination of benchmarks on energy use and incentives to meet them.

But the key and mostly missing ingredient thus far is the force of law.

Authors

Robert Diab
Robert Diab is a Professor in the Faculty of Law at Thompson Rivers University in Kamloops, British Columbia. He writes on topics in law and technology, including internet governance, compelled decryption, and digital privacy. He is a co-author of Search and Seizure (Irwin, 2023) on Canada’s constit...
