Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice
Laura Lazaro Cabrera, Laura Caroli, David Evan Harris / Mar 28, 2025
A photo of the European Commission in Brussels. Shutterstock
The EU AI Act, which came into force on August 1, 2024, initiated a “co-regulatory” process involving a working group of close to 1,000 stakeholders from AI companies, academia, and civil society organizations. This working group is now in the final stages of drafting the General Purpose AI Code of Practice, effectively a detailed instruction manual for how AI developers can comply with the key portions of the AI Act that set out rules for general-purpose AI models. Developers who follow the manual are afforded a “presumption of compliance” with the Act, though they may instead choose to demonstrate compliance by other means.
Two of us (Laura Lazaro Cabrera and David Evan Harris) are members of this massive working group, and one of us (Laura Caroli) helped write the EU AI Act as a European Parliament staffer. We are writing here together because we are gravely concerned that the penultimate draft of the Code of Practice is failing to protect human rights. This draft relies on faulty logic that dramatically limits the ways in which AI developers would need to mitigate human rights risks from their AI models.
The AI Act, and consequently the Code, distinguish between “general-purpose AI models” and “general-purpose AI models with systemic risk.” Models falling in the latter category — based on a training compute threshold set by the AI Act or upon designation by the Commission based on a range of criteria — are subject to additional obligations, including conducting risk assessments and then mitigating identified risks. One of the Code’s key functions is to specify the types of risks that providers of these models must assess.
From its first draft, the Code has taken a two-tier approach distinguishing between risk categories. In the current draft, however, the second risk category went from merely “additional” to “optional.” At a workshop for civil society participants in the Code of Practice drafting process, held this Monday under the Chatham House Rule, one participant described the need to reduce requirements on AI companies so that Europe would not miss out on AI innovations. The drafters’ commentary on the latest draft (see page 8) makes clear how this is happening:
Summary of changes from the second draft: In Measure II.1.2, we have made systemic risk tiers mandatory only for selected systemic risks (Appendix 1.1) but optional for other systemic risks....
The list of optional risks is shockingly broad:
- Risks to public health, safety, or public security, e.g., risk to critical sectors; risk of major accidents; risk to critical infrastructure.
- Risks to fundamental rights, e.g., risk to freedom of expression; risk to non-discrimination; risk to privacy and the protection of personal data; risk from child sexual abuse material (CSAM) and non-consensual intimate images (NCII).
- Risks to society as a whole, e.g., risk to the environment; risk to non-human welfare; risk to financial system stability; risk to democratic processes; risk from illegal, violent, hateful, radicalising, or false content.
Notably, discrimination moved from the compulsory risk list to the optional list in this draft as well. Currently, only four risks require compulsory assessment under the Code: “chemical, biological, radiological and nuclear (CBRN),” “cyber offence,” “loss of control,” and “harmful manipulation.”
The argument the drafters appear to be making is that human rights risks are not among the main “systemic risks” stemming from the “high-impact capabilities” of powerful GPAI models, defined as capabilities that “match or exceed the capabilities recorded in the most advanced general-purpose AI models.”
However, as a public letter from the lead negotiators of the AI Act to European Commission Vice President and Commissioner Henna Virkkunen clearly explains, and as a joint civil society letter emphasizes, these risks need not stem solely from a model’s high-impact capabilities; they can result simply from the large-scale adoption of AI models. Nor are such impacts theoretical or long-term existential risks. They are already happening today. That such a broad set of human rights risks could be downgraded to optional status sends a clear signal that this process is buckling under pressure from corporate interests.
There is broad scientific consensus that discrimination is a known problem plaguing AI models, ranging from regressive gender stereotypes to outright racism. Unless actively assessed and mitigated, discrimination by AI models is near-inevitable because it is ingrained in a model’s own learning materials, owing to the biases so frequently present in vast training datasets. As models grow larger and more sophisticated, the risk of discrimination is far from disappearing — instead, emerging research shows that advanced AI models still discriminate, but do so covertly.
It is similarly common knowledge that illegal child sexual abuse material (CSAM) is present in the data used to train some popular AI models, which makes it possible for those models to replicate harmful content at scale. These models have also been used to create “undressing” or “nudifying” applications that produce non-consensual intimate imagery (NCII), of both children and adults. The well-known LAION-5B dataset case is an illustrative example, yet identifying CSAM in that dataset was only possible because it was open and allowed external researchers to query it. Open source datasets remain the exception rather than the norm (even with models that are advertised as “open source”). Because of this, model providers are in the privileged — and often unique — position to monitor and clean their datasets. Currently, this responsibility is not acknowledged by the draft Code of Practice.
Finally, AI models' reliance on publicly scraped content, rich in personal data, poses significant privacy challenges, not least because personal data can be regurgitated and extracted. A growing number of cases show that models’ retention of personal data can result in inaccurate, defamatory information being generated about real individuals.
Neglecting these well-recognized risks can have real, unintended consequences for the EU.
DeepSeek’s R1 AI model has made waves in recent months by showing the world how smaller models, trained with a fraction of the computing power and cost of the most advanced models currently on the market (GPT, Gemini, etc.) and released under an open-weight license, can be just as powerful as those leading systems. Based on statements from its makers, R1 would not meet the compute threshold under the AI Act that automatically classifies a model as posing systemic risk. However, R1 could still be designated as a systemic risk model by the EU AI Office using a combination of criteria listed in Annex XIII to the AI Act (such as the number of business users and the number of end-users).
DeepSeek has already exhibited several vulnerabilities and risks, including privacy risks (which have already led several European data protection authorities to launch investigations into both the DeepSeek app and the R1 model behind it) and, according to red-teaming research, risks of harmful bias and harmful content, in addition to widely documented disinformation risks. Under the current version of the Code, if R1 were designated as a model with systemic risk and DeepSeek were to commit to the Code of Practice, the company could well avoid incorporating meaningful human rights risk mitigations into the model and simply attempt to paper over privacy noncompliance with superficial measures. The other risks mentioned above it would not have to address at all.
International approaches
It is also important to note that the approach taken in the current version of the Code not only wrongly interprets the AI Act but also contradicts existing international efforts and emerging frameworks on AI safety.
The Hiroshima Code of Conduct for Advanced AI Systems — which includes GPAI models within its scope — explicitly requires providers to assess and mitigate risks to privacy, as well as risks of harmful bias and discrimination. In fact, Recital 110 of the EU AI Act, which explains the notion of systemic risk, is intentionally and almost entirely taken from Action 1 of the Hiroshima Code. There, the Hiroshima Code lists systemic risks as risks that “organizations commit to devote attention to” and among them mentions “societal risks, as well as risks to individuals and communities such as the ways in which advanced AI systems or models can give rise to harmful bias and discrimination or lead to violation of applicable legal frameworks, including on privacy and data protection,” and “threats to democratic values and human rights, including the facilitation of disinformation or harming privacy.”
Expert international consensus continues to evolve in this direction, with the latest International AI Safety Report, edited by Yoshua Bengio — also a Co-Chair of the Code of Practice process — similarly highlighting a wide array of risks specifically pertaining to GPAI models, ranging from manipulation and bias to privacy and the environment, among many others.
Multilateral efforts have fully endorsed this approach for years. In its pioneering AI Principles, the Organisation for Economic Co-operation and Development (OECD), an international organization of 38 member countries that develops policy solutions to global challenges, requires actors to implement safeguards in connection with human rights. Many governments have agreed to adhere to the OECD AI Principles, including 22 EU member states individually and the European Union as a whole. Several other global AI summits have taken a similarly rights-protective approach, with the Bletchley Declaration and the French AI Action Summit declaration highlighting the importance of protecting human rights.
Conclusion
The Code of Practice draft in its current form takes a step backward with its weak and selective approach to human rights protections — an approach that is also at odds with the AI Act’s ambition to set a world-leading standard for AI regulation, built upon international approaches and offering robust protections for human rights.
The drafters of the Code of Practice should act decisively to ensure that GPAI model developers take into account the human rights risks their models present. If the EU fails to deliver on the promise of the AI Act as a global governance model that puts humanity above corporate interests, the aftershocks will be felt around the world.
Owen Doyle, AI Policy Research Analyst at Harris Research Group, contributed research support for this article.