
Freshfields TQ

Technology quotient - the ability of an individual, team or organization to harness the power of technology


EU AI Act unpacked #28: European Commission releases critical AI Act implementation Guidelines (Part 4) - Prohibited AI Practices

On 4 February 2025, the European Commission (EC) published guidelines on prohibited practices under the EU’s AI Act (the Guidelines). The Guidelines provide the EC’s interpretation of Article 5 of the AI Act, including legal explanations and practical examples. While the Guidelines are non-binding, with ultimate authoritative interpretation reserved for the Court of Justice of the EU, they are indicative of how the EC – through the AI Office – intends to coordinate EU-level enforcement.

This blog post is the last of our four-part series covering the EC’s first set of implementation guidelines published on 4 February 2025. Part 1 covered the AI system definition, while Part 2 and Part 3 covered the first six prohibited AI practices. In this final part, we discuss the scope of the remaining prohibitions on (1) biometric categorisation for certain ‘sensitive’ characteristics, and (2) real-time remote biometric identification (RBI) systems for law enforcement purposes. We also look at the interplay between those prohibitions and other Union laws, and at when they become applicable.

You can find all episodes of our EU AI Act unpacked blog series by clicking here.

Prohibition of biometric categorisation for certain ‘sensitive’ characteristics

Article 5(1)(g) of the AI Act prohibits AI systems that categorise individuals based on biometric data to deduce or infer sensitive characteristics such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. 

According to the Guidelines, the prohibition applies only if the following cumulative conditions are met:

  • Biometric categorisation system: The AI system must systematically classify or group individuals into predefined categories based on their biometric data, without identifying them. This includes, for example, categorising individuals on social media according to political orientation by analysing facial features in uploaded photos to send them targeted political messages.
  • A categorisation at an individual level: The categorisation must target individual natural persons, not groups. This covers systems that can single out persons based on bodily or special features.
  • The processing of biometric data: ‘Biometric data’ is defined in Article 3(34) of the AI Act and includes data such as facial expressions, dactyloscopic data, keystroke patterns and body postures or movements. 
  • To deduce or infer sensitive characteristics: The AI system must be designed to determine a limited number of sensitive characteristics: race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This includes, for example, deducing an individual’s race based on their voice.

Which types of biometric categorisation systems are nevertheless allowed?

The Guidelines also clarify two exclusions from the prohibition:

  • Labelling or filtering of lawfully acquired biometric datasets: Article 5(1)(g) of the AI Act allows categorisation when strictly necessary to label or filter lawfully acquired biometric datasets. The Guidelines indicate this applies beyond law enforcement, for example, to categorise medical images by skin or eye colour to enhance diagnostic accuracy. This exclusion aims to enable uses that support demographic representation or prevent bias or discrimination.
  • Categorisation ancillary to a commercial service: Under Article 3(40) of the AI Act, an AI system is not deemed to perform biometric categorisation if it is strictly necessary for objective technical reasons and ancillary to a main commercial service. Examples include online marketplaces that categorise bodily features by allowing consumers to preview a product on them, as the categorisation would be ancillary to the principal service of selling the product.

The Guidelines stress that even where the prohibition does not apply, AI systems categorising individuals based on sensitive characteristics will generally qualify as high-risk and must meet all corresponding compliance obligations.

Prohibition of real-time RBI systems for law enforcement purposes

Finally, the Guidelines delineate the prohibition on real-time RBI systems under Article 5(1)(h) of the AI Act. This provision was among the most contentious during the legislative process and contributed significantly to delays in the AI Act’s adoption.

According to the Guidelines, the prohibition applies when the following cumulative conditions are met:

  • RBI system: The AI system must qualify as an RBI system, i.e. be capable of identifying individuals at a distance based on physical or behavioural characteristics, without their active involvement, by comparison with a reference database. This excludes AI systems intended to be used for biometric verification or authentication, whose sole purpose is to verify that a specific person is who they claim to be.
  • ‘Use’ of the system: Unlike other prohibited AI systems, this applies solely to the deployment of a real-time RBI system. The development, sale, or placing on the market of such systems is not covered by the prohibition, as they may be lawfully used in the limited exceptions outlined below.
  • In real-time: The system must process biometric data instantaneously, near-instantaneously or in any event without any significant delay (to avoid the prohibition being circumvented through the retrospective use of RBI systems). While the notion of ‘significant delay’ is not defined in the AI Act and must be assessed on a case-by-case basis, this is generally the case when the person is likely to have left the place where the biometric data was captured before the RBI system processes it for identification. In such cases, the use of the RBI system would fall outside the scope of the prohibition (but would still qualify as high-risk).
  • In publicly accessible spaces: The use must occur in physical locations open to an undetermined number of persons, regardless of ownership. Temporary access restrictions, such as requiring tickets for entry, do not necessarily alter a location’s public accessibility status. Locations with restricted access, such as private offices, secured facilities, and closed institutional settings, are excluded from the prohibition, as are online spaces.
  • For law enforcement purposes: The system must be used for crime prevention, detection, investigation, or prosecution, as defined in Article 3(46) of the AI Act, by law enforcement authorities or on their behalf (e.g. public transport companies, sport federations or banks that are entrusted by law enforcement authorities to carry out certain actions to counter specific crimes).

Which types of RBI systems are nevertheless allowed?

RBI systems meeting the conditions above are prohibited, except for three limited exceptions provided for in Article 5(1)(h)(i)-(iii) of the AI Act:

  • Targeted search for specific victims of serious crimes and missing persons: This concerns cases of abduction, trafficking, and sexual exploitation, as well as searches for missing individuals.
  • Prevention of imminent threats to life or terrorist attacks: This covers specific (not hypothetical), substantial, and imminent threats to the life or physical safety of individuals, including threats to critical infrastructure whose disruption would cause serious harm to the population, as well as genuine and present or genuine and foreseeable threats of a terrorist attack or other immediate public security risks.
  • Identification or localisation of suspects of serious crimes: This applies to offences listed in Annex II of the AI Act, including terrorism, human trafficking, sexual exploitation of children, organised crime, and other serious offences punishable by a custodial sentence or a detention order for a maximum period of at least four years under the relevant Member State law.

Mandatory safeguards with respect to (permitted) real-time RBI

The Guidelines set out mandatory safeguards and conditions governing permitted uses of real-time RBI under Articles 5(2) to (7) of the AI Act, aimed at mitigating risks to fundamental rights. 

Use must be strictly limited in time, geographic scope, and to the identification of specific suspects, victims, or offenders within a defined group. Generalised or indiscriminate scanning is prohibited. Each use requires prior authorisation from a judicial or independent administrative authority, excluding prosecutors. Authorities must also complete a Fundamental Rights Impact Assessment under Article 27 of the AI Act (see our previous blog post on the topic). RBI systems must be registered in the EU database under Article 49, with delayed registration permitted only in emergencies. Finally, decisions producing legal or similar effects may not be based solely on RBI output; human verification is always required.

Interplay between the prohibitions and other Union laws

The AI Act applies horizontally across all sectors and without prejudice to other Union legislation; as the Guidelines make clear, it complements, and does not override or restrict, other existing Union laws.

Other relevant Union laws include the protection of fundamental rights, consumer protection, employment, product safety (e.g. the Medical Devices Regulation), data protection as well as platform regulation such as the DSA concerning the obligations for providers of intermediary services that embed AI systems or models into their services.

The interaction with data protection is particularly significant since most AI systems process personal data. This processing remains governed by, inter alia, the GDPR and the Law Enforcement Directive (LED). The Guidelines clarify that the AI Act applies as lex specialis to Article 10 of the LED regarding restrictions on biometric categorisation systems and real-time RBI systems used for law enforcement purposes.

It is anticipated that the EC will develop further guidance on the interaction between the AI Act and other legislation, including the GDPR, DSA, DMA, product safety rules, and EU copyright law. Discussions on this matter are understood to be occurring between the AI Office and members of the European Parliament's AI working group.

Timing and applicability

The prohibitions in Article 5 of the AI Act have applied since 2 February 2025, irrespective of when the AI system was placed on the market or put into service. As of 2 August 2025, the provisions on governance, enforcement, and penalties are also applicable. This means that although the prohibitions have been binding since February, authorities can now impose penalties for non-compliance.

Tags

ai, eu ai act, eu ai act series