Perspective

The Overlooked Climate Risks of Artificial Intelligence

Felippa Amanta, Charlie Wilson / Jul 30, 2025

Catherine Breslin & Tania Duarte / Better Images of AI / AI silicon clouds collage / CC-BY 4.0

Artificial Intelligence (AI) is rapidly diffusing into every sector of the economy and daily life. Its proliferation is often framed within the prevailing narrative of ‘AI for good,’ including the promise of AI as a tool to address global challenges such as climate change. Yet this optimistic framing overlooks a growing and underexamined reality: the extent to which AI itself can contribute to climate risks.

Current concerns about AI’s adverse climate impacts are largely confined to operational energy and water use by data centers. While these impacts are important, they represent only a fraction of the ways in which AI can influence climate outcomes. AI systems will reshape behavior, affect infrastructure, and alter economic and political dynamics in ways that may work against decarbonization efforts. The risks extend from individual choices and system-level efficiencies to public trust in technologies and climate governance.

Recognizing these varied risk pathways is essential if we are to ensure AI supports, rather than undermines, climate action. Drawing on a broad taxonomy of AI-related risks, our recent analysis has mapped dozens of potential linkages between AI and climate vulnerabilities, capturing key dimensions of AI-related risk such as misinformation, discrimination, privacy and security, malicious use, human-computer interaction failures, and broader socioeconomic harms. It also links these risks to the building blocks of net-zero energy pathways: 1) sectoral energy demand and electricity supply networks; 2) low-carbon technology deployment and behavioural shifts; 3) climate policy; and 4) climate governance institutions (see full map here). While these links are still emerging, they highlight the urgent need to account for AI’s indirect and systemic impacts on climate goals.

This article outlines several key areas where AI contributes to climate risk, whether directly, indirectly, or through unintended consequences. Understanding these systemic risk linkages is essential to align AI development with climate change mitigation goals; otherwise, those goals will be pushed out of reach.

Direct energy impacts and emissions

AI-driven efficiency gains, while potentially reducing resource use, often trigger rebound effects. These occur when improved efficiency lowers costs, which in turn spurs increased consumption that ultimately offsets any environmental benefits. Efficiency and productivity gains from the many AI applications that reduce frictions or transaction costs can therefore lead to a surge in energy-hungry activity. This is evident in contexts from e-commerce and advertising to freight logistics and buildings’ energy performance, as well as in AI data centers themselves.

Moreover, AI-powered platforms frequently shape consumer behavior through automated nudging, prioritizing engagement and the interests of AI developers and deployers over sustainability. For instance, ChatGPT automatically offers follow-up suggestions in chats that prompt people to continue using it for more queries or more image generation, stimulating rather than managing demand. Such systemic incentivization of energy-intensive behaviour is at odds with climate objectives.

Another risk is the growing reliance on agentic AI, capable of making decisions on behalf of users. In contexts like travel, healthcare, and finance, such systems risk misalignment with users’ values. For example, an AI agent tasked with booking your next travel destination might optimize for comfort or price, rather than low-carbon options.

Cybersecurity threats to low-carbon technologies

AI significantly escalates cybersecurity risks. Researchers widely anticipate an increase in AI-enabled hacking and data breaches, driven by automated propagation and attack capabilities. This presents an obstacle to the deployment of smart, networked low-carbon technologies, including electric vehicles, smart building systems, and grid-responsive technologies such as heat pumps. These technologies need to be adopted at scale, and perceived security risks could seriously delay progress.

Furthermore, the integration of AI into increasingly decentralized and digital systems introduces vulnerabilities that could trigger cascading failures across critical infrastructure, including energy networks, industrial sectors, and agriculture. These systemic risks are often glossed over by the hype surrounding AI as a sustainability solution in the major carbon-emitting sectors.

Climate misinformation and disinformation

The expansion of generative AI has exacerbated challenges associated with online misinformation and disinformation. AI tools can produce biased, misleading, or false responses to questions about climate change, including greenwashing and other content that misrepresents fossil fuel companies’ role in the climate crisis. In the wrong hands, AI can be misused to create more convincing, personalized climate deepfakes more cheaply and quickly. AI can also widen the dissemination of climate misinformation through bots that promote certain content.

Climate mis- and disinformation may also promote climate scepticism, dissuade people from adopting low-carbon choices and behaviours, or, worse, convince people to reject climate action altogether. A climate denial think tank managed to do just that by creating and spreading an image of a dead whale in front of wind turbines, claiming that offshore wind was responsible. Social media campaigns of this nature risk creating a feedback loop between scientific denialism and political inaction.

Socioeconomic disruption and indirect climate harms

As a general-purpose technology, AI’s societal impacts are transformative and diffuse, affecting employment, income distribution, and social cohesion. A great deal of research has focused on AI’s impact on jobs, skills, wages, and widening inequalities, as well as on the risks of discrimination, surveillance, and erosion of civil liberties. While such issues are widely recognized, including in legislation like the EU AI Act and governance discussions focused on AI red lines, their indirect implications for climate action remain underexplored.

AI-induced socioeconomic harms—such as inequality, reduced autonomy, and increased surveillance—can indirectly undermine climate efforts. These effects can reduce an individual’s agency and capabilities to act on climate and diminish civic engagement more broadly. At a higher level, they contribute to public distrust, weaken institutional legitimacy, and sap the social consensus and collective action necessary to pursue shared climate goals that protect the global commons.

A call for climate-responsive AI governance

The examples outlined above are only a subset of a broader landscape of climate-related AI risks documented in our larger database. As AI becomes more embedded in daily life and critical infrastructure, so too does its influence on the systems, behaviours, and policies that shape climate outcomes. The impacts of AI on climate are dense, complex, and diffuse, extending well beyond its energy use in data centers.

Recognizing these systemic linkages is crucial if we want to align AI development with climate goals. It is also the first step towards developing a more robust regulatory and governance framework with appropriate risk mitigation strategies: identifying who should take responsibility for which risks, and then how those risks should be tackled. AI and climate change are both global collective problems that require collective action; only when we acknowledge the full extent of AI’s impacts on the climate can we begin to address them.

Authors

Felippa Amanta
Felippa Amanta is a PhD researcher in the Environmental Change Institute, University of Oxford and an Associate Researcher in the Center for Indonesian Policy Studies. Felippa’s research is on the social and environmental impacts of digitalisation and the role of policy. Prior to her PhD, Felippa ha...
Charlie Wilson
Charlie is a Professor of Energy and Climate Change and Senior Research Fellow in the Environmental Change Institute (ECI) at the University of Oxford. He is also a visiting research scholar at the International Institute for Applied Systems Analysis (IIASA) in Austria. Charlie’s research is at the ...
