Measuring AI’s Environmental Impacts Requires Empirical Research and Standards
Tamara Kneese / Feb 12, 2024

On February 1, Senator Edward Markey's (D-MA) office, along with Senator Martin Heinrich (D-NM), Representative Anna Eshoo (D-CA), and Representative Don Beyer (D-VA), introduced the Artificial Intelligence Environmental Impacts Act of 2024. The bill calls for the Environmental Protection Agency (EPA) to conduct a study on the environmental impacts of artificial intelligence (AI) across its lifecycle, from the mining of rare earth minerals and the manufacturing and disposal of chips, to the training and use of models. It also calls for the National Institute of Standards and Technology (NIST) to develop standards for measuring and reporting AI's environmental impacts.
The organization I work for, Data & Society, endorsed the bill — an unusual move for us as an independent research nonprofit, and one we took because the legislation supports our goal of using empirical, sociotechnical research to inform policy decisions.
We know that climate change is already here and that we must swiftly reduce greenhouse gas emissions to prevent the worst climate outcomes. The United Nations Intergovernmental Panel on Climate Change's 2023 report stated that emissions must be halved by 2030 to limit warming to 1.5 degrees Celsius. Already, 2023 was the warmest year on record. And climate change will only exacerbate existing inequalities; climate disasters disproportionately impact poorer countries in the Global South. Even before the recent AI boom, the information and communication technology industry was contributing to the problem, accounting for roughly 1.5 to 4% of worldwide emissions.
It is clear that the rush to deploy generative AI and other AI systems is increasing emissions at a time when the planet cannot afford it. A recent report by the International Energy Agency estimates that the growth of cryptocurrencies and AI will cause energy consumption in data centers to double by 2026, at which point they would use as much energy per year as the entire country of Japan. Data centers strain the grid in places where energy infrastructure is already brittle, and the energy demands of AI are also reviving coal plants.
The race for more compute amid the AI boom has also further concentrated power among a handful of companies that control chip production and data centers. Some powerful technologists have called for cleaner energy resources, like nuclear fusion, to accommodate the growing demand for AI, and have sometimes suggested that such choices will lead to more renewable energy infrastructure (a claim also previously made by Bitcoin enthusiasts). But switching to renewable energy is not a silver bullet: the rest of the supply chain is still needed to sustain the growth of AI, including the mining and manufacturing required to produce specialized AI chips and the water that production consumes. And even renewable energy sources are not infinite; they too rely on physical infrastructure, which can leave behind massive amounts of waste.
Immediate action is needed. But what kind? The answer depends on what the research tells us: on empirical studies that measure, and reporting standards that cover, not just emissions but the other environmental and social impacts associated with artificial intelligence. Researchers at Hugging Face have measured the carbon and energy requirements associated with the deployment of AI, rather than focusing only on the requirements of AI training, as is common practice. Green AI advocates have also called for carbon-aware computing, which considers the technologist's relationship to the grid, for example by scheduling model training for times of day when more renewable energy is available. But this does not necessarily cut down on overall emissions if the demand for energy grows. It also fails to address the larger question of equity: is it fair for tech to dominate the world's energy resources?
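The article does not prescribe any particular measurement tooling, but for readers curious what such measurement can look like in practice, here is a minimal sketch using the open-source codecarbon library (one of several available tools, and an assumption on my part rather than something named above). It estimates the energy use and CO2-equivalent emissions of a block of code, such as an inference workload; the run_inference function is a hypothetical stand-in for a real model call.

```python
# A minimal sketch of estimating the energy and carbon footprint of an
# AI inference workload with the open-source codecarbon library.
# `run_inference` is a hypothetical placeholder for a real model call.
from codecarbon import EmissionsTracker


def run_inference(prompt: str) -> str:
    # Stand-in for a deployed model endpoint or local model call.
    return f"model output for: {prompt}"


tracker = EmissionsTracker(project_name="inference-footprint")
tracker.start()
try:
    for prompt in ["example prompt"] * 1000:
        run_inference(prompt)
finally:
    # stop() returns an estimate of emissions in kilograms of CO2-equivalent.
    emissions_kg = tracker.stop()

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Carbon-aware scheduling pushes this a step further by deferring flexible jobs to hours when the local grid's carbon intensity is lower, though, as noted above, that alone does not reduce total energy demand.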
While there are efforts underway to make systems more energy efficient and to measure and report emissions, decarbonization is not enough. (Many tech companies focus on energy efficiency because it also translates into cost savings and can be justified as a business investment.) Data centers also consume water: researchers have shown how the carbon and water footprints of AI must be weighed against each other, and how companies' decisions about AI can have disparate regional effects, including in places that are already experiencing drought. There are also the AI supply chain's outsize impacts on labor to consider, including the workers hired to train AI under precarious conditions, often in the Global South. With so many complex, interconnected considerations and implications, it is crucial for technologists to work with standards bodies, regulators, advocates, and qualitative researchers to determine how best to measure, report, and mitigate the entire range of AI's impacts on ecosystems, habitats, and communities.
Some prominent technologists claim that we can solve the climate crisis easily enough, perhaps even through technology, reasoning that climate change is therefore less of an existential threat than the hypothetical risks they prefer to ascribe to AI itself. But climate change is happening right now, and to all of us (though its effects will be disproportionately felt in the Global South, which makes it easy for too many to dismiss). AI's contributions to climate change are its real existential risk. To make that urgency clear and help us understand how to address it, we need meaningful sociotechnical research that reveals the full spectrum of AI's impacts. That's what the Artificial Intelligence Environmental Impacts Act would encourage and enable, and why it has our support.