On 4 February 2025, the European Commission published guidelines on prohibited practices under the EU’s AI Act (Guidelines). While non-binding, the Guidelines offer valuable insight into how the Commission interprets the prohibitions set out in Article 5 of the AI Act. They also outline how the Commission will coordinate EU-level enforcement and aim to support both businesses and national regulators in understanding compliance obligations.
The Commission clarifies the scope of the different practices prohibited under the AI Act, making clear that the application of Article 5 requires a case-by-case assessment. As a result, companies developing or using AI systems should be vigilant to ensure they are not engaged in a prohibited practice. The prohibitions discussed in this blogpost apply to both providers and deployers of AI systems.
This blogpost builds on our earlier blogpost #24 on the Guidelines. Here we highlight key takeaways from three specific prohibitions under Article 5:
- predictive policing;
- untargeted scraping of facial images; and
- emotion recognition in workplace and education institutions.
Prohibition of predictive policing
Article 5(1)(d) of the AI Act prohibits the use of AI systems to assess or predict criminal behaviour solely on the basis of profiling or personality traits. The aim is to ensure individuals are judged on their actual behaviour, rather than on AI-generated predictions.
The Commission provides detailed guidance on the cumulative conditions that must be met for this prohibition to apply:
- Assessing the risk or predicting the likelihood of a person committing a crime: This covers AI systems generating risk scores based on historical data (such as police records), combined with indicators suggesting a likelihood of criminal behaviour.
- Solely based on profiling or the assessment of personality traits and characteristics: ‘Profiling’ is defined with reference to Article 4(4) of the GDPR as automated processing of personal data to evaluate personal aspects. ‘Personality traits and characteristics’ can include factors such as gender, race, ethnicity, address, income, health, personal preferences, behaviour or financial status.
The prohibition can also extend to private actors, including:
- Private entities entrusted by law to exercise public authority and powers for crime prevention, detection or prosecution, including those offering AI-driven crime analytics tools to law enforcement agencies.
- Private entities acting to fulfil legal obligations by assessing or predicting criminal risk, for example banks screening customers for money laundering activity.
However, the Commission clarifies that private actors who profile individuals as part of regular business operations and safety measures (i.e. internal risk assessments carried out in the ordinary course of business) with the aim of protecting their own interests (such as detecting financial irregularities) would generally fall outside the scope of this prohibition, even if such profiling incidentally reveals a risk of criminal behaviour.
What falls outside the prohibition of predictive policing
The Commission clarifies several scenarios that fall outside the scope of the Article 5(1)(d) prohibition:
- AI systems that support human assessment based on objective and verifiable facts directly linked to a criminal activity are explicitly excluded. For example, the use of AI to profile suspicious behaviour in a crowd is permitted – provided the AI output is meaningfully reviewed by humans based on objective and verifiable facts linked to the potentially criminal behaviour.
- Additional exclusions highlighted by the Commission include:
- predictions based on crime locations or the likelihood of crime in certain areas, unless location data is used to profile a person;
- predictions relating to legal entities, unless the predictions concern natural persons acting via a legal entity; and
- predictions of administrative offences.
Prohibition of untargeted scraping of facial images
Under Article 5(1)(e) of the AI Act, AI systems used for the untargeted scraping of facial images from the internet or CCTV footage – for the purpose of creating or expanding facial recognition databases – are prohibited. This provision is intended to counter the risk of mass surveillance.
The Commission clarifies the conditions that must be fulfilled for the prohibition to apply:
- ‘Untargeted scraping’ of ‘facial images’: ‘Untargeted’ refers to data collection that does not focus on specific individuals or groups. ‘Scraping’ means automatically extracting content or information – typically using web crawlers, bots, or other automated tools – from multiple sources. The prohibition is limited to the scraping of human faces.
- From ‘the internet or CCTV footage’, including data that people have voluntarily shared on social media.
- To create or expand a ‘facial recognition database’: ‘Database’ refers here to any collection of information organised for rapid search and retrieval by a computer, which enables comparing and matching human faces in digital images or video frames against existing facial profiles to identify likely matches.
What falls outside the prohibition of untargeted scraping of facial images
The Commission sets out examples of practices falling outside the scope of the prohibition:
- Targeted image searches: Using a picture to search for an individual’s face online is permitted, as this involves a targeted rather than untargeted approach.
- Facial image databases used solely for AI training or testing: The prohibition does not apply where databases are used for training or testing AI models – provided the individuals in the images are not identified.
- Existing facial recognition databases: Databases created before the prohibition enters into application are not subject to the restriction – unless they are subsequently expanded using AI-enabled untargeted scraping.
Prohibition of emotion recognition in workplace and education institutions
Article 5(1)(f) of the AI Act prohibits the use of AI systems designed to recognise emotions of natural persons in workplace and educational settings. The aim is primarily to protect against discriminatory and intrusive outcomes, given the imbalance of power in the context of work or education.
The Guidelines provide further clarity on how this prohibition should be interpreted:
- ‘Identification or inference of emotion’ (or intention): This includes AI systems that identify or infer emotional states such as anger, happiness, sadness, fear, surprise or disgust. Physical states that are not emotional, such as fatigue or pain, are excluded.
Though Article 5(1)(f) does not explicitly reference intentions, the Commission confirms that the prohibition also covers intention inference, bringing it in line with broader AI Act provisions on emotion recognition systems (Guidelines, paragraph 245).
- Use of ‘biometric data’: The prohibition only applies to AI systems using ‘biometric data’ as per the definition of emotion recognition systems in Article 3(39) of the AI Act. This includes inputs such as facial expressions, fingerprint (dactyloscopic) data, keystroke patterns (typing style) and body postures or movements.
- Use in the areas of ‘workplace and education institutions’
- Workplace: This covers any setting in which an individual performs work – including offices, factories, retail shops, open-air sites, vehicles and remote workspaces. The prohibition also applies to candidates during the hiring process. However, emotion recognition systems used for personal training purposes (provided they have no effect on the employment relationship) or for interactions with customers are excluded from the scope.
- Educational institutions: This includes all public and private educational settings, such as schools, universities, vocational training, and continuous education programmes.
What falls outside the prohibition of emotion recognition in workplace and education institutions
As explicitly stated in Article 5(1)(f), the prohibition does not apply to systems used for:
- Medical reasons, such as therapeutic applications. However, systems designed to assess general well-being – such as detecting stress or burnout – are not covered by this exemption and therefore remain prohibited.
- Safety reasons, such as applications specifically relating to physical safety. The exception does not extend to broader interests such as the protection of property.
The Commission notes that the use of these exceptions should be accompanied by appropriate safeguards, including expert assessments where necessary.
Nevertheless, Member States are allowed to adopt stricter rules to protect workers. For example, national law may impose a blanket ban on the use of emotion recognition systems for medical purposes in the workplace (see Article 2(11) of the AI Act).
Conclusion
Understanding the nuances of these prohibitions is crucial for companies developing or deploying AI systems as they seek to comply with the EU’s evolving regulatory landscape. The Guidelines offer helpful clarifications and practical examples to support businesses and regulators in navigating these complex requirements.
What’s next?
In our next blogpost ‘European Commission releases critical AI Act implementation Guidelines – Prohibited AI Practices (Part 4)’ we will delve deeper into the remaining prohibitions covered in the Guidelines:
- Biometric categorisation for certain ‘sensitive’ characteristics; and
- Real-time biometric identification systems used for law enforcement purposes.
We will also examine the Commission’s guidance on how these prohibitions interact with other areas of EU law, as well as the details provided on when the prohibitions will begin to apply.