Ecological alignment of Artificial Intelligence
The European Parliament’s amendments to the proposal for a Regulation on Artificial Intelligence (AI) represent a socio-ecological turnaround compared with the European Commission’s original draft. The parliamentary draft proposes a series of environmental and climate-related provisions which, in the Oeko-Institut’s view, are feasible and technically achievable. The Oeko-Institut has reviewed these proposals in a Policy Paper. To achieve the European Union’s environmental and digital policy goals, all of the proposed rules should be enacted in full.
Duly considering environmental impacts
The debate about the ethical and social risks of Artificial Intelligence (AI) has hitherto focused on existential threats and risks such as job losses and discrimination. By contrast, the climate-related and environmental impacts of the new technologies have tended to go unnoticed. Yet AI applications require vast amounts of energy for training and operation, and the use of resources and the disposal of hardware can also have substantial environmental impacts. Scientists are now increasingly identifying indirect effects as well, recognising that subtle, perhaps imperceptible misalignments within widely used everyday systems can combine to reinforce and accelerate climate change and environmental degradation. They describe, for example, how the use of AI systems can lead to excessive nitrogen fertiliser use in agriculture. Dynamics such as these may be almost impossible to reverse later on. The parliamentary draft therefore aims to ensure that these findings are taken into account: “As well as acknowledging the potential importance of AI systems for the sustainable transformation, Parliament recognises that they may pose a substantial risk to the environment and, indeed, to democracy and the rule of law. This is an important and correct step, which we hope will be reflected in the outcome of the current trilogue on the AI Regulation,” says lawyer Dr Peter Gailhofer, a senior researcher and expert in data and algorithm regulation at the Oeko-Institut.
The parliamentary mandate for the negotiations on the AI Regulation also introduces obligations to address direct environmental risks: energy and resource consumption must be measured and logged, for example, and other foreseeable environmental risks must be assessed and mitigated. From an environmental standpoint, however, gaps remain: the parliamentary proposal does not use environmental criteria to determine which systems fall within the scope of the Regulation. “Environmental risks must now be assessed and mitigated, but in relation to systems that do not pose any particular ecological risks. By contrast, systems that are especially environmentally sensitive may remain outside the scope of application,” Peter Gailhofer explains.
Integrating a socio-ecological perspective
On a positive note, the analysis concludes, the parliamentary draft offers considerable flexibility for further development. For example, it includes mechanisms that could broaden the Regulation’s scope of application even after its adoption, allowing a response to new insights into environmental impacts. To ensure that the Regulation can continue to evolve and be aligned more effectively with sustainability requirements, it should, in Dr Gailhofer’s view, also provide more channels for environmental scientists and civil society to make their voices heard.