Hope: The AI Act’s Approach to Address the Environmental Impact of AI
Zuzanna Warso, Kris Shrishak / May 21, 2024

Hope. That’s the EU AI Act's position on the environmental impacts of AI. The hope that AI systems will help to achieve the goals of environmental sustainability. The hope that the negative environmental impacts of AI will be mitigated.
The EU AI Act aims to increase the uptake of “human-centered” and “trustworthy” AI systems. With mentions of fundamental rights, including environmental protection, sprinkled in the opening parts of the legislation, the intent of the lawmakers is clear: the development and use of AI should not adversely affect the EU’s obligation to protect the environment.
However, a deeper read of the legislation exposes the lawmakers’ belief in “AI for sustainability,” the idea that the use of AI will result in environmentally beneficial outcomes, rather than a commitment to the “sustainability of AI.” Recital 4 lays bare this belief when it says that AI contributes to a wide range of environmental benefits “across the entire spectrum of industries and social activities.” We are told these benefits can be found in agriculture, biodiversity and ecosystem conservation and restoration, and climate change mitigation and adaptation.
This belief is also reflected in the Act, which allows AI systems to be deployed for “exceptional reasons of (...) environmental protection” without fulfilling the obligations of the Act if permitted by the regulator. The Act also allows the processing of personal data to develop AI systems within a regulatory sandbox, even when the personal data was collected for other purposes. Such AI systems must be developed to safeguard “substantial public interest,” which could include “energy sustainability” or “protection of biodiversity, protection against pollution, green transition measures, climate change mitigation, and adaptation measures.”
This reliance on hope (and hype) instead of evidence and facts means that the AI Act falls short of addressing the environmental impacts of AI. There is little evidence that "AI for sustainability" will materialize, and even if it does, it will introduce new risks.
There is a growing body of evidence on the environmental costs of AI. The ambition of creating general-purpose (or multi-purpose) AI models comes at a steep cost to the environment, given the amount of energy these systems require. Multi-purpose, generative architectures are orders of magnitude more resource-intensive than task-specific systems. Tasks that generate new content, such as text generation, summarization, image captioning, and image generation, are the most energy- and carbon-intensive.
But AI's environmental impact extends beyond energy-related greenhouse gas emissions: building data centers involves mining minerals such as lithium, cobalt, gallium, and germanium, which harms the environment, and enormous amounts of water are used during the training and use of AI systems.
Nevertheless, the belief that AI can be good for the environment succeeds in diverting attention from the real question: Are the environmental and societal costs of developing and using AI acceptable?
The Act includes limited and inadequate measures to reduce the environmental impact of AI systems. Some of these measures are voluntary, while key details rely on the standardization process. This process requires the European Commission to issue a standardization request for deliverables on reducing high-risk AI systems’ “consumption of energy and of other resources” during their lifecycle and on the “energy-efficient development of general-purpose AI models.” In a rush, the Commission sent a standardization request to the European standardization bodies in May 2023, many months before the AI Act was finalized.
The scope of that request does not include standards related to the energy consumption of AI models and systems. The Commission will have to send an updated request “without undue delay.” But this cannot be a unilateral process: the Commission must consult the EU AI Board, the advisory forum, and other relevant stakeholders. The exact timing for issuing the updated standardization request is still unknown. There is even greater uncertainty around the delivery date of these standards, and the Commission has given itself four years to assess their progress.
The Commission has only one other task in relation to the environmental impacts of AI: when assessing whether a general-purpose AI (GPAI) model poses a systemic risk, it may consider the estimated energy consumption of training the model. And that is it.
AI companies also have obligations, but the checklist of what they must do is equally meager:
- Providers of GPAI models must document the models' energy consumption; where it is unknown, an estimate based on the computational resources used is sufficient (a rough sketch of such an estimate follows this list).
- Providers of high-risk AI systems must account for direct or indirect harm to the environment and report such harm to the regulator as a serious incident.
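Neither the Act nor the Commission prescribes how a compute-based estimate should be made. Purely as an illustration, the minimal Python sketch below shows one common back-of-the-envelope approach: multiplying the number of accelerators, training time, average power draw, and a data-center overhead factor. The function name and all figures (GPU count, power draw, PUE) are hypothetical assumptions, not values taken from the regulation or from any real training run.

```python
# Illustrative only: a back-of-the-envelope estimate of training energy
# from computational resources used. All numbers are hypothetical.

def estimate_training_energy_kwh(
    gpu_count: int,
    training_hours: float,
    avg_gpu_power_kw: float,
    datacenter_pue: float = 1.2,  # assumed power usage effectiveness overhead
) -> float:
    """Energy (kWh) = accelerators x hours x average power draw x PUE."""
    return gpu_count * training_hours * avg_gpu_power_kw * datacenter_pue

# Hypothetical example: 1,000 GPUs for 30 days at an average draw of 0.4 kW each.
energy_kwh = estimate_training_energy_kwh(
    gpu_count=1_000,
    training_hours=30 * 24,
    avg_gpu_power_kw=0.4,
)
print(f"Estimated training energy: {energy_kwh:,.0f} kWh")  # ~345,600 kWh
```

Any real disclosure would need per-accelerator measurements or vendor specifications; the point of such arithmetic is only that the reported figure is a rough proxy, not a measurement.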
Finally, voluntary codes of conduct might be drafted for “assessing and minimizing the impact of AI systems on environmental sustainability.” Voluntary initiatives can complement regulatory efforts but cannot substitute for legally binding obligations. Without binding regulation and regulatory oversight, there is insufficient incentive for AI developers and deployers to prioritize environmental concerns over short-term profits.
When evaluating how governments should regulate technologies with far-reaching consequences for people and the environment, it is prudent to proceed with caution rather than jumping on the hype bandwagon. Caution is especially warranted when there is growing evidence of environmental harm caused by AI technology and only speculative claims of its usefulness for society. EU legislators, despite recognizing AI’s adverse impact on the environment, have taken refuge in hope and hype instead of a precautionary approach.
Environmental protection should have been a driving force behind the EU's AI Act. Instead, it is treated as a side note.