

EU AI Act unpacked #23: European Commission releases critical AI Act implementation guidelines (Part 1) - Definition of AI systems

The European Commission has taken a significant step forward in clarifying the EU AI Act by releasing new implementation guidelines in early February 2025. These guidelines specifically address two fundamental aspects of the Act: the definition of AI systems (Art. 3 (1)) and prohibited AI practices (Art. 5). While technically still in draft form, the guidelines have received Commission approval and are expected to remain substantially unchanged before their Official Journal publication.

For businesses developing or deploying AI systems, these guidelines serve as a crucial roadmap for compliance, reflecting how the Commission’s AI Office will coordinate enforcement across the EU. Though non-binding, they provide essential interpretation that both companies and national regulators will rely upon.

In this blogpost, we give an overview of the guidelines on the definition of AI systems (Part 1). Our following blogposts will cover the guidelines on prohibited practices (Parts 2 and 3).

You can find all episodes of our EU AI Act unpacked blog series by clicking here.

Understanding the AI system definition

The Commission emphasises that determining whether a system qualifies as AI requires a nuanced analysis of its specific architecture and functionality, rather than a mechanical checklist approach. This addresses a concern voiced by some companies during the preparation of the EU AI Act: that long-running traditional automated systems might inadvertently be caught by the new AI regulation. The following elements define an AI system under the guidelines, each requiring careful consideration in your compliance strategy.

While those elements are cumulative, the Commission specifies that they need not be present continuously throughout both the pre-deployment phase and the post-deployment (or ‘use’) phase of the AI system. Some elements may appear at one phase only.

1. Machine-based system – a technology-neutral approach

The Commission adopts a deliberately broad, technology-neutral interpretation that encompasses both hardware and software components enabling AI functionality. This approach extends from traditional computing architectures to advanced systems like quantum computing, ensuring the guidelines remain relevant as technology evolves.

2. Autonomous operation – degrees of independence

Systems must demonstrate some independence from human involvement, though this doesn’t mean complete automation. The Commission clarifies that systems requiring manual inputs may still qualify if they generate outputs autonomously. ‘Autonomy’ in this context refers to having some degree of independence in actions.

Systems capable of operating with limited human intervention may trigger particular risks and human oversight measures. The Commission places responsibility on providers to evaluate the need for enhanced human oversight on a case-by-case basis, considering the specific risks and capabilities of each system.

3. Adaptive capabilities – optionality over necessity

The Commission takes a flexible approach to adaptiveness, considering that systems don't need to possess adaptive or self-learning capabilities (i.e. allowing the behaviour of the system to change during use). While not a decisive condition, adaptiveness seems to be used as an illustration of the type of tools that may qualify as an AI system. Therefore, the fact that a tool shows adaptiveness or is capable of adaptiveness (even if not activated or employed) should, if all other criteria are met, corroborate the qualification of an AI system.
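To make adaptiveness concrete, below is a minimal, purely illustrative Python sketch (our own hypothetical example, not taken from the guidelines) of a system whose behaviour changes during use as it accumulates feedback:

```python
# Hypothetical illustration (not from the guidelines): a recommender whose
# behaviour changes during use, i.e. it exhibits 'adaptiveness'.

class AdaptiveRecommender:
    """Keeps a running average of user feedback per item; rankings shift as
    new feedback arrives, so outputs for the same input can change over time."""

    def __init__(self):
        self.scores = {}  # item -> (sum of feedback, count)

    def recommend(self, items):
        # Rank items by their learned average feedback (default 0.0).
        return max(items, key=self._avg)

    def record_feedback(self, item, rating):
        s, n = self.scores.get(item, (0.0, 0))
        self.scores[item] = (s + rating, n + 1)  # behaviour adapts during use

    def _avg(self, item):
        s, n = self.scores.get(item, (0.0, 1))
        return s / n


rec = AdaptiveRecommender()
print(rec.recommend(["a", "b"]))  # arbitrary before any feedback
rec.record_feedback("b", 5.0)
print(rec.recommend(["a", "b"]))  # now prefers "b": behaviour changed in use
```

On the Commission’s reading, even if such an update mechanism were present but never activated, the capability for adaptiveness would still corroborate classification as an AI system, provided the other criteria are met.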

4. System objectives – both explicit and implicit

The guidelines distinguish between explicit and implicit objectives. Explicit objectives are clearly stated goals directly encoded by developers, such as the optimization of cost functions, probabilities, or cumulative rewards. Implicit objectives, on the other hand, are goals deduced from system behaviour or underlying assumptions. For specific examples of implicit objectives, reference can be made to the OECD’s Explanatory Memorandum on the Updated OECD Definition of an AI System.
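For illustration, the short sketch below (our own hypothetical example, not drawn from the guidelines or the OECD memorandum) shows an explicit objective encoded directly by the developer as a cost function that training minimises:

```python
# Hypothetical illustration: an 'explicit objective' encoded directly by the
# developer as a cost function (mean squared error) that training minimises.

def mse(w, xs, ys):
    # The explicitly stated goal: minimise average squared prediction error.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def train(xs, ys, steps=200, lr=0.01):
    # Naive gradient descent on the encoded objective for a one-weight model.
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w = train(xs, ys)
print(round(w, 2), round(mse(w, xs, ys), 4))  # w ~ 2.0, objective driven near 0
```

An implicit objective, by contrast, would not appear anywhere in the code; it would have to be deduced from how the trained system actually behaves.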

5. Inference capabilities – understanding the boundaries

The Commission provides detailed guidance on what constitutes ‘inference’, encompassing three main approaches. 

First, machine learning approaches include supervised, unsupervised, and self-supervised learning, as well as deep learning and reinforcement learning. Systems must demonstrate more sophisticated capabilities than basic statistical learning to qualify under this category.

Second, logic and knowledge-based approaches encompass systems that learn from encoded expert knowledge, utilise symbolic representation, and include deductive and inductive reasoning engines. Third, deterministic methods must demonstrate more sophisticated analysis than basic optimisation and show capability for pattern analysis and autonomous output adjustment.

The Commission explicitly excludes several types of systems from the AI definition. Basic mathematical optimisation systems using established formulas, simple data processing following predefined human instructions, classical heuristic systems using predefined rules, and basic prediction systems using only statistical learning rules all fall outside the scope of the regulation.
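The contrast can be sketched in a few lines of Python (a hypothetical illustration of ours, not an example from the guidelines): the first filter applies a predefined human rule, a classical heuristic of the kind the Commission excludes, while the second derives its decision rule from labelled data:

```python
# Hypothetical contrast (our own illustration, not from the guidelines):
# a classical heuristic using a predefined human rule versus a filter
# whose decision rule is inferred from data.

def heuristic_spam_filter(message):
    # Predefined, hand-written rule: no learning or inference involved.
    return "free money" in message.lower()

def learned_spam_filter(training_data):
    # 'Learns' a length threshold from labelled examples: the decision
    # rule is derived from data rather than hand-coded by a human.
    spam = [len(m) for m, is_spam in training_data if is_spam]
    ham = [len(m) for m, is_spam in training_data if not is_spam]
    threshold = (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2
    return lambda message: len(message) > threshold

data = [("hi", False), ("ok", False),
        ("claim your free money nowwww", True),
        ("you have won a big prize", True)]
classify = learned_spam_filter(data)
print(heuristic_spam_filter("Free money!"))     # True, by the fixed rule
print(classify("congratulations, click here"))  # decided by the learned threshold
```

Whether such a simple statistical learner would itself clear the ‘more than basic statistical learning’ bar is precisely the kind of case-by-case assessment the guidelines call for.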

6. System outputs – impact and influence

AI systems must generate outputs that can include predictions, content, recommendations, or decisions. These outputs should reflect more nuanced capabilities than those of conventional systems, demonstrating an ability to leverage learned patterns or expert-defined rules. The Commission emphasises that AI systems typically offer more sophisticated analysis and adaptation than traditional software.

7. Environmental impact – active influence required

Systems must actively impact their operating environment to fall within the scope of the regulation. This impact can occur in physical environments through tangible objects, or in virtual environments through digital spaces, data flows, and software ecosystems. Passive systems without environmental impact fall outside the definition, emphasising the Commission's focus on systems that create meaningful change in their operating context.

Practical implications for businesses

For businesses developing or deploying potentially AI-regulated systems, thorough documentation becomes crucial, particularly when claiming exclusion from the AI system definition. Companies should maintain detailed technical documentation explaining why their system falls outside the scope and assess systems individually, considering their specific architecture and functionality rather than applying blanket categorisations.

Special attention should be paid to autonomous operation capabilities and human oversight requirements, as these may trigger additional compliance obligations. Companies should also review both explicit and implicit objectives of their systems, as both can bring a system within scope of the regulation.

The Commission indicates that these guidelines may be updated based on implementation experience, national regulatory interpretations, and Court of Justice of the European Union decisions. Businesses should monitor for such developments and adjust their compliance strategies accordingly, maintaining flexibility in their approach while ensuring robust documentation of their compliance rationale.


Tags

ai, eu ai act, eu ai act series