EU's AI Act: Tread the Guidelines Lightly
Kris Shrishak / Feb 21, 2025
Office of the European Commission in Brussels. Shutterstock
Europe’s AI Act was published in July 2024. But it will be years before interpretive guidance from the Court of Justice of the European Union (CJEU) arrives. In the meantime, the European Commission is required to publish guidelines to help developers, deployers, and anyone else who cares about this law. Although not legally binding, these guidelines should be considered by EU countries when they enforce the law and impose penalties for infringements by AI developers and deployers. This, however, does not prevent regulators and fundamental rights bodies in EU countries from interpreting the law differently than the Commission.
The first five articles of the AI Act, which include the prohibitions, have applied since February 2, 2025. It is in this context that we should read the two sets of guidelines the European Commission published in early February, after the relevant parts of the AI Act had already begun to apply. The guidelines cover the definition of an AI system and the prohibitions in the AI Act. Both attempt to improve legal clarity. But do they?
Definition of an AI system
The guidelines on the definition excel at one thing — stating once and for all that spreadsheets are not AI systems. Beyond that, the document feeds more confusion than clarity.
There are seven main elements to the definition of an AI system: “(1) a machine-based system; (2) that is designed to operate with varying levels of autonomy; (3) that may exhibit adaptiveness after deployment; (4) and that, for explicit or implicit objectives; (5) infers, from the input it receives, how to generate outputs (6) such as predictions, content, recommendations, or decisions (7) that can influence physical or virtual environments.”
Not all of these components are equally important. While “varying levels of autonomy” is necessary, adaptiveness is optional: “All systems that are designed to operate with some reasonable degree of independence of actions fulfil the condition of autonomy in the definition of an AI system.” The guidelines shed light neither on whether “varying levels” includes zero autonomy nor on the hierarchy of importance among the seven elements.
However, Recital 12 of the AI Act states that the “capability to infer” is a key characteristic of AI systems. This capability “transcends basic data processing by enabling learning, reasoning or modelling.” Contradictorily, the guidelines place systems using “mathematical optimization”, even when capable of inference, out of scope because “they do not transcend ‘basic data processing’.”
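To see why this carve-out is hard to sustain, consider a minimal sketch (with invented numbers) contrasting basic data processing, a hand-written rule, with a system that learns its rule from data and therefore infers how to generate outputs:

```python
import numpy as np

# Basic data processing: a fixed, hand-written formula. Nothing is
# learned; the output follows mechanically from the rule.
def fixed_score(income: float, debts: float) -> float:
    return income - debts

# Inference: the rule itself is learned from data. The system derives,
# from the inputs it receives, how to generate outputs.
X = np.array([[40_000.0, 5_000.0],
              [20_000.0, 15_000.0],
              [60_000.0, 2_000.0]])        # invented feature rows
y = np.array([35_000.0, 5_000.0, 58_000.0])  # invented targets

# Least-squares fit: the weights are estimated, not hand-written.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
print(weights)      # learned parameters
print(X @ weights)  # outputs generated by the learned rule
```

Note that the learned rule here is obtained precisely through mathematical optimisation (least squares), which is why excluding “mathematical optimization” while anchoring the definition in the “capability to infer” is contradictory.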
Furthermore, “linear or logistic regression” is provided as an example of a method used for “mathematical optimization,” which the guidelines state is out of scope. However, the guidelines distinguish between “optimising the functioning of the systems” and “adjustments of their decision-making models.” Only the former is out of scope.
Applications in high-risk areas, such as evaluating “the eligibility of natural persons for essential public assistance benefits and services,” involve decision-making about people. Using linear or logistic regression in such applications would involve making “adjustments of their decision-making models.” In that context, logistic regression is within the scope of the AI Act.
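To make this concrete, here is a minimal sketch (with invented data, feature names, and thresholds, using scikit-learn) of a logistic regression acting as a decision-making model for benefits eligibility rather than as internal optimisation of a system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [annual_income_in_thousands, household_size]
X_train = rng.normal(loc=[30, 3], scale=[10, 1.5], size=(500, 2))
# Toy ground truth: eligible (1) if income falls below a threshold
y_train = (X_train[:, 0] < 28).astype(int)

# The decision-making model: learned from data, applied to people
model = LogisticRegression().fit(X_train, y_train)

# At deployment, the model infers a decision for a new applicant
applicant = np.array([[25.0, 4.0]])
print(model.predict(applicant))        # e.g., [1] -> "eligible"
print(model.predict_proba(applicant))  # probability behind the decision
```

Retraining or recalibrating such a model on new applicant data would be an adjustment of its decision-making model, not mere optimisation of the system’s functioning.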
In addition to occasionally slipping into the US English spelling “optimization” – only once in a while, for variety – the guidelines provide imaginative reasons why certain systems may not be AI systems. These reasons include usage “in a consolidated manner for many years” and “their performance.” You can also find circular reasoning: “standard spreadsheet software applications which do not incorporate AI-enabled functionalities” are not AI systems. Perhaps the most baffling example: “[P]hysics-based systems … [that] use machine learning techniques to improve computational performance” are also out of scope. It is unclear whether “physics-based systems” refers to systems that rely on the laws of physics; all computing is bound by the laws of physics.
Prohibitions
The guidelines on prohibitions are an improvement in both length and quality, albeit a marginal one. They rightly state that a case-by-case analysis is necessary to assess whether a specific AI system is prohibited. Nevertheless, these guidelines do not offer examples of such an assessment; instead, they provide examples that are “merely indicative.”
The AI Act includes eight prohibitions:
- Manipulation and deception
- Exploitation of vulnerabilities
- Social scoring
- Individual criminal offence risk assessment and prediction
- Untargeted scraping to develop facial recognition databases
- Emotion recognition in workplace and education institutions
- Biometric categorisation
- Real-time remote biometric identification (RBI)
The prohibition on real-time RBI applies to deployers. All the others apply to providers, deployers, and operators (who could take up both roles). These prohibitions cover the ‘use’ of an AI system, which is not limited to the intended use but includes misuse, regardless of whether it is reasonably foreseeable.
There are very few places where the guidelines are direct and clear. For example, “activities of Europol and other Union security agencies, such as Frontex, fall within the scope of the AI Act.” Human rights groups have previously raised concerns that the national security exemption might be used by law enforcement agencies to bypass the prohibitions. On this critical issue, the guidelines do not provide any additional guidance beyond existing case law from the Court of Justice.
Another example is the clarification that the prohibition on untargeted scraping to develop a facial recognition database does not require “the sole purpose of the database … to be used for facial recognition; it is sufficient that the database can be used for facial recognition.” Untargeted scraping of facial images, however, is already unlawful under the GDPR, EUDPR, and the LED. Enforcement, especially extraterritorial, has been the problem.
Don't manipulate, deceive and exploit
The AI Act prohibits AI systems that are manipulative or deceptive, or that exploit the vulnerabilities of people, in ways that cause significant harm. This prohibition applies even if the humans designing or deploying the AI system do not intend it to cause significant harm. The effect matters.
The prohibition can cover an “AI system that creates and tailors highly persuasive messages based on an individual’s personal data or exploits other individual vulnerabilities [and] influences their behaviour or choices to a point of creating significant harm.” This could include advertising that targets “people who live in low-income post-codes and are in a dire financial situation … and causing them significant financial harm.”
Meanwhile, an “AI system using personalised recommendations based on transparent algorithms and user preferences and controls [that] engages in persuasion” is not prohibited. By implication, the guidelines indicate that opaque recommender systems could be within the scope of the prohibitions.
Although the rules for general purpose AI (GPAI) apply only from August 2, 2025, the prohibitions already apply to GPAI systems whose use falls within their scope. For example, a GPAI model deployed in the form of a chatbot that is manipulative or deceptive and causes significant harm would be a prohibited AI system.
The prohibition on social scoring that is detrimental to people applies in public and private contexts, regardless of the sector or field of application. In contrast, the prohibition on “individual criminal offence risk assessment and prediction” only applies when it is “based solely on the profiling of a natural person or on assessing their personality traits and characteristics.” This means predictive policing based on location is not prohibited. So patrols could be heavily deployed in areas chosen by predictive algorithms that are “based on historical data and perpetuate discrimination and inequities in law enforcement.”
Biometrics
The prohibition on emotion recognition only applies in workplaces and educational institutions. The notion of a workplace includes the recruitment process and extends to self-employed people. However, the guidelines make for displeasing reading when they state that “[u]sing webcams and voice recognition systems by the call centre to track their customers’ emotions, such as anger or impatience, is not prohibited” and that use “by a supermarket … to conclude that somebody is about to commit a robbery, is not prohibited under Article 5(1)(f) AI Act.” These examples, of course, do not mean that such practices are lawful. For instance, processing customers’ biometric data in these contexts without a valid legal basis would be unlawful under data protection law.
The prohibition on real-time remote biometric identification (real-time RBI) in public spaces applies to use for law enforcement purposes, even if deployed by another entity, such as a public transport company or a sports club, when a law enforcement authority has delegated the deployment to them. In contrast to the ban on emotion recognition, this prohibition comes with a list of exceptions (which is not, in itself, a legal basis to use real-time RBI): “the targeted search of victims of three specific serious crimes and missing person,” “the prevention of imminent threats to life or physical safety or a genuine threat of terrorist attacks,” and the “identification of suspects and offenders of certain serious crimes.”
In these exceptional cases, real-time RBI can be used only if:
- A national law authorizing one or more of these cases is adopted;
- A fundamental rights impact assessment (FRIA) to assess necessity and proportionality has been performed by the law enforcement agency (LEA);
- For each use, the LEA (or an entity acting on its behalf) must obtain an authorization from a judicial or independent administrative authority whose decision is binding (except in emergencies, when post-hoc approval is required), notify the market surveillance authority (MSA) and the data protection authority (DPA), and add information about the use to the non-public EU database;
- The DPA and MSA submit an annual report to the European Commission noting, among other things, the frequency of real-time RBI use in their country.
As no EU country has yet adopted a national law authorizing the use of real-time RBI, there is currently a blanket prohibition on its use across the EU. Ireland and Denmark are exempted from the law enforcement-related provisions of the AI Act by Protocols 21 and 22 annexed to the EU treaties; they can make their own rules.
Conclusion
Let's not forget that, beyond the AI Act, AI systems may be prohibited by other laws, such as anti-discrimination law and the GDPR (e.g., due to the lack of a legal basis for processing personal data for AI training).
The guidelines on prohibitions, despite referring to the various fundamental rights at risk and to the EU Charter, fail to indicate the role of the fundamental rights bodies designated under Article 77, with the exception of data protection authorities. This is surprising, since the prohibitions are well within these bodies' remit and their expertise will be essential for the market surveillance authorities.
The best thing about the guidelines on the prohibitions – they can be updated. The best thing about the guidelines on the definition of an AI system – they are not legally binding.