On 4 February 2025, the European Commission published guidelines on prohibited practices under the EU’s AI Act (the Guidelines). These Guidelines set out the Commission’s interpretation of the prohibitions under Article 5 AI Act, including legal explanations and various practical examples to help companies understand and comply with the requirements outlined in the AI Act.
While the Guidelines are non-binding, with ultimate authoritative interpretations reserved for the Court of Justice of the EU, they reflect how the Commission – through the AI Office – will coordinate EU-level enforcement. They are designed to help both businesses and national regulators understand compliance requirements.
The Commission has adopted a notably broad interpretation of the different practices prohibited under the AI Act, making clear that the application of Article 5 requires a case-by-case assessment. As a result, companies developing or using AI systems should be vigilant in verifying that they are not caught by a prohibition.
In this blogpost, we focus on a few key clarifications provided by the Commission in relation to the ‘general concepts’ applicable to prohibited practices, and on the key takeaways from the Guidelines on the first three prohibitions under Article 5: (i) AI-enabled harmful subliminal techniques, manipulation and deception, (ii) exploitation of vulnerabilities, and (iii) social scoring.
General concepts applicable to Prohibited AI Practices
The Guidelines shed some light on several important general concepts that apply across prohibited AI practices:
- Material scope and key definitions. The Commission clarifies the concepts of ‘placing on the market,’ ‘putting into service’ and ‘using’ an AI system:
- The ‘placing on the market’ of an AI system covers any supply for distribution or use in commercial activity within the Union. Any means of supply are covered, including providing access online through APIs or other user interfaces, via cloud services, direct downloads, as physical copies, or embedded in physical products.
- The ‘putting into service’ of an AI system covers both supply for first use to third parties and in-house development and deployment in the Union.
- The ‘use’ of an AI system covers its deployment at any point in its lifecycle after it has been placed on the market or put into service.
- Liability considerations for AI providers. The Commission emphasises that the prohibitions apply to all AI systems, whether they have a specific ‘intended purpose’ or are ‘general-purpose.’ Deployers are expected not to use any AI system in a manner prohibited under Article 5 AI Act, including by not bypassing any safety guardrails implemented by providers. With regard to providers, the Commission expects them:
- to build in safeguards to prevent and mitigate harmful behaviour and misuse; and
- to include provisions in their contractual relationships with deployers (eg in the terms of use) that:
- exclude use of their AI system for prohibited practices;
- provide ‘appropriate information’ to deployers; and
- establish necessary human oversight.
While this may be particularly challenging for providers of ‘general-purpose’ AI systems, who may not have sufficient visibility of how deployers use their systems, operators are only required to take measures that are ‘appropriate,’ ‘feasible,’ and ‘proportionate’ to their AI systems and to the circumstances of the case.
Prohibitions of AI-enabled harmful subliminal techniques, manipulation and deception, and exploitation of vulnerabilities
The first two prohibitions in Article 5(1)(a) and (b) aim to protect individuals and vulnerable persons from significantly harmful effects of AI-enabled manipulation and deception, and exploitation.
The Commission provides detailed definitions and examples of these prohibited techniques:
- Subliminal Techniques. These are techniques capable of influencing behaviour in ways where the person remains unaware of the influence, how it works, or its effects. For example, this could include hidden images within visual content that aren’t consciously perceived but may still be processed by the brain and influence behaviour.
- Purposefully Manipulative Techniques. These techniques are designed or objectively aim to influence, alter, or control an individual’s behaviour. The Commission notes that merely incidental manipulative behaviour may not be covered under certain conditions. A key example would be an AI system that deploys background audio or images leading to mood alterations (such as increasing anxiety) that influence users’ behaviour.
- Deceptive Techniques. These involve presenting false or misleading information with the objective or effect of deceiving individuals and influencing their behaviour. A notable example is an AI chatbot that impersonates a friend or relative using synthetic voice technology to pretend it is that person.
- Exploitation of Vulnerabilities. This refers to objectively making use of age, disability, or specific socio-economic situations in a manner that is harmful to the exploited persons or others. For instance, certain AI-enabled differential pricing practices in insurance services that exploit specific socio-economic situations to charge higher prices to lower-income consumers would fall under this category.
The Commission emphasises two crucial conditions that must be met for these prohibitions to apply:
- The practices must have either the objective or the ‘effect’ of materially distorting natural persons’ behaviour. These effects need only be ‘likely’ or ‘capable’ of materialising, which requires an objective assessment of the circumstances, knowledge, and available information.
- The practices must cause or be reasonably likely to cause ‘significant harm,’ which is broadly defined to include physical, psychological, financial, and economic harm.
The underlying rationale is to protect individual autonomy and well-being from manipulative, deceptive, and exploitative AI practices that can subvert and impair an individual’s autonomy, decision-making, and free choices.
Prohibition of AI-enabled social scoring
The AI Act prohibits AI practices that (i) evaluate or classify people over time, (ii) based on their social behaviour or personal or personality characteristics, where they (iii) lead to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was originally generated or collected, and/or to treatment that is unjustified or disproportionate to the gravity of the social behaviour (Article 5(1)(c)). The objective of this prohibition is to target scoring practices that treat or harm people unfairly and result in social control.
The Commission provides detailed guidance on how to read the different elements of the provision. Here are the key points:
- Evaluation or classification system
- ‘Evaluation or classification’ should give rise to a score, which can take various forms such as a number, a ranking, a label, etc. With respect to the term ‘evaluation,’ the Commission refers to the concept of ‘profiling’ under the GDPR as a specific form of evaluation.
- ‘Social behaviour’ is seen as a broad term that can generally include actions, behaviour, habits, and interactions within society, and usually covers behaviour-related data points from multiple sources. This includes social behaviours in business contexts, for example the payment of debts or behaviour when using certain services.
- ‘Personal or personality characteristics’ covers a broad category of objective or subjective characteristics related to a natural person, including gender, race, ethnicity, address, income, health, personal preferences and interests, behaviour, financial liquidity, level of debt, type of car, performance at work etc.
- Detrimental treatment and its links to scoring
- The causal link between the social score and the treatment is crucial: the detrimental or unfavourable treatment must be the consequence of the score. The Commission clarifies several important points:
- If the AI-generated score is combined with a human assessment, the prohibition applies if the AI-generated score plays a ‘sufficiently important’ role in the final decision.
- The prohibition applies even if the social score is produced by an organisation other than the one that uses it. For example, a public authority may rely, for a creditworthiness assessment of a natural person, on a score produced by another company.
- Treatment in an ‘unrelated social context’ would in most cases go against the reasonable expectations of the persons concerned and violate the GDPR.
- To determine if the treatment is ‘disproportionate to the gravity of the social behaviour,’ the Commission calls for a case-by-case assessment.
The Commission emphasises that the prohibition has a broad scope of application in both public and private contexts. For example, an unacceptable social scoring practice would be a private credit agency using an AI system to determine the creditworthiness of people and deciding whether an individual should obtain a loan based on unrelated personal characteristics.
- What falls outside the prohibition of social scoring:
The Commission lists situations which are out of scope of the prohibition:
- Scoring of legal entities (unless based on the evaluation of natural persons’ social behaviour).
- Individual ratings by users (unless combined with other information and analysed by AI).
- More generally, the Commission points out that social scoring is only prohibited in the limited cases where all the conditions listed in the prohibition are fulfilled. For instance, practices in credit scoring, targeted commercial advertising and profiling, anti-money laundering and other financial fraud surveillance would typically fall outside the scope when they are:
- Based on relevant data;
- Complying with sectoral Union legislation (for example, consumer protection, data protection and digital services); and
- Involving treatment justified and proportionate to the social behaviour.
These practices remain subject to case-by-case assessment to ensure compliance.
Understanding these nuances is crucial for companies developing or deploying AI systems to ensure compliance with the new regulatory framework.
In our next blogpost ‘European Commission releases critical AI Act implementation Guidelines - Prohibited AI Practices (Part 3)’ we will delve deeper into the remaining prohibitions covered in the Guidelines (ie facial recognition databases, emotion recognition in the workplace or in education institutions, crime prediction, biometric categorisation, and real-time remote biometric identification).
You can find all episodes of our EU AI Act unpacked blog series by clicking here.