
EU AI Act unpacked #30: Commission launches consultation on serious AI incident reporting rules

Introduction

On 26 September 2025, the European Commission opened a public consultation on its draft guidance and standard reporting template for “serious incidents” under the EU AI Act. The consultation closes on 7 November 2025.

Article 73 of the AI Act sets out reporting obligations for providers of high-risk AI systems when a serious incident occurs. The draft guidance highlights the purpose of these duties – early warning for authorities, accountability for providers (and, to a degree, deployers), timely corrective measures, and transparency to build public trust – and explains how the notification regime is meant to work in practice.

The Commission also notes that other international incident frameworks exist – most notably the OECD’s AI Incidents Monitor and Common Reporting Framework – and that the EU’s incident monitoring seeks to align with the OECD’s framework wherever possible. The draft guidance explicitly specifies that it applies solely to Article 73 reporting concerning high-risk AI systems. It does not address the distinct obligation under Article 55(1)(c), which requires providers of general-purpose AI models presenting systemic risk to inform the AI Office of any serious incidents.

Key terms and concepts

The Commission’s draft outlines what counts as a “serious incident” and a “widespread infringement” under Article 73 of the EU AI Act. A serious incident is an incident or a malfunction of a high-risk AI system that directly or indirectly leads to one of four outcomes: 

  1. the death of a person, or serious harm to a person’s health;
  2. a serious and irreversible disruption of the management or operation of critical infrastructure;
  3. the infringement of obligations under Union law intended to protect fundamental rights;
  4. serious harm to property or the environment.

By contrast, a widespread infringement is a cross-border act or omission contrary to EU law that harms (or is likely to harm) the collective interests of individuals, with common features and occurring concurrently in several Member States.

Causation

Notably, the guidance sets out that causation may be direct or indirect. An AI output can still trigger reporting if it contributes to the harm via a human decision or a downstream process, provided the system was used for its intended purpose or in a reasonably foreseeable way. The Commission provides concrete examples, such as an incorrect analysis of medical imaging that leads to a wrong diagnosis or treatment, a patient wrongly classified as low risk whose condition is consequently missed, and a recruitment tool that discards highly qualified candidates because of gender or ethnicity.

Thresholds and examples

The draft guidance then elaborates on each of the four outcomes:

  1. Health. “Serious harm” includes life-threatening illness or injury, temporary or permanent impairment, hospitalisation or its extension, necessary medical interventions to prevent such outcomes, chronic disease, serious psychological conditions, and foetal distress, death, or congenital abnormalities.
  2. Critical infrastructure. There is no statutory threshold for what constitutes “serious”; the draft guidance looks to existing definitions under the Critical Entities Resilience Directive (CER) and the Network and Information Security Directive (NIS2) for orientation. A disruption is “serious,” for example, where there is an imminent threat to life or public safety, destruction of key infrastructure, or disruption of social and economic activities. It is “irreversible” where restoration is not reliably possible without long lead times – such as needing to rebuild physical infrastructure or specialised equipment, contamination of water, soil or air, loss or corruption of essential records (e.g., patient data or civil registries), permanent disablement of a critical node (e.g., a rail junction, power substation or landing station), or loss of a space-based asset. If it is unclear within the 2-day window whether a disruption is irreversible, an initial report should be submitted.
  3. Fundamental rights. Only serious violations that affect many people are reportable; one-off or minor issues are not. Examples include an AI-based recruitment system excluding candidates based on ethnicity or gender, a credit-scoring system excluding certain categories of persons (for example, by name or neighbourhood), and a biometric identification system that frequently misidentifies people of certain ethnic backgrounds.
  4. Property and environment. For property, the guidance suggests looking at economic impact, cultural or historical significance, permanence, and wider effects; damage is serious where the asset can no longer be used for its intended purpose and, in any case, should exceed 5% of the purchase price (a rough screen for these two tests is sketched right after this list). For environmental harm, it points to existing EU instruments and asks organisations to consider the baseline condition, duration, extent and reversibility of the damage, with examples such as contamination of environmental resources and disruption of natural ecosystems.
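
To make the two property-damage indicators concrete, here is a minimal sketch in Python. It is illustrative only: the function name and parameters are hypothetical, it hard-codes the 5% figure mentioned in the draft, and it ignores the qualitative factors (cultural or historical significance, permanence, wider effects) that a real assessment would also weigh.

```python
# Hypothetical screen for "serious harm to property" under the draft guidance.

def property_damage_is_serious(
    damage_cost: float,
    purchase_price: float,
    usable_for_intended_purpose: bool,
) -> bool:
    # Test 1: the asset can no longer be used for its intended purpose.
    # Test 2: the damage should in any case exceed 5% of the purchase price.
    exceeds_threshold = damage_cost > 0.05 * purchase_price
    return (not usable_for_intended_purpose) and exceeds_threshold

# Example: EUR 12,000 of damage to a EUR 100,000 machine that is out of service.
print(property_damage_is_serious(12_000, 100_000, usable_for_intended_purpose=False))  # True
```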

Reporting duties

Providers of high-risk AI systems

Under the guidance, providers must notify the Market Surveillance Authorities (MSAs) of the Member States where the incident occurred. If the exact place is unknown, the deployer’s place of business is used instead. Reports must be filed immediately, and in any event no later than 15 days from when the provider becomes aware of the serious incident. Shorter deadlines apply in urgent cases: immediately, but no later than 2 days, in the event of a widespread infringement or a serious and irreversible disruption of the management or operation of critical infrastructure; and immediately, but no later than 10 days, if the incident resulted in death. If needed, an initial, incomplete report may be submitted.
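
The deadline tiers can be summarised in a short sketch. This is a minimal illustration assuming a simple classification of the incident; the type names are hypothetical, and reporting must always happen immediately, with these figures only as outer limits counted from the moment of awareness.

```python
from enum import Enum, auto


class IncidentType(Enum):
    DEATH = auto()                      # incident resulted in death
    WIDESPREAD_INFRINGEMENT = auto()    # cross-border widespread infringement
    CRITICAL_INFRA_DISRUPTION = auto()  # serious, irreversible infrastructure disruption
    OTHER_SERIOUS = auto()              # any other serious incident


def reporting_deadline_days(incident: IncidentType) -> int:
    """Outer limit in days from awareness to notification under Article 73."""
    if incident in (IncidentType.WIDESPREAD_INFRINGEMENT,
                    IncidentType.CRITICAL_INFRA_DISRUPTION):
        return 2   # urgent cases: no later than 2 days
    if incident is IncidentType.DEATH:
        return 10  # no later than 10 days
    return 15      # default: no later than 15 days


print(reporting_deadline_days(IncidentType.DEATH))  # 10
```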

Providers shall investigate without delay, including a risk assessment of the incident and corrective action. They “shall not perform any investigation which involves altering the AI system […] in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action.” In practice, the guidance flags changes that could make later analysis unreliable, such as updating, replacing or reconfiguring components directly involved; overwriting datasets or disabling monitoring tools needed for the investigation; or modifying log files, sensor data, or decision-making algorithms.

Lastly, the guidance provides that providers shall cooperate with competent authorities (and, where relevant, notified bodies) during investigations. This includes responding to such authorities within a reasonable time, which the Commission interprets as within 24 hours.

Deployers

When deployers identify a serious incident, they must immediately inform the provider, and then the importer or distributor as well as the relevant market surveillance authorities; “immediately” is understood as within 24 hours. If the deployer cannot reach the provider (including where the provider does not answer within 24 hours), the provider’s obligations apply mutatis mutandis to the deployer.
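
As a rough illustration of this escalation path, consider the following sketch. The function and labels are hypothetical, and the 24-hour reading of “immediately” reflects the Commission’s interpretation noted above.

```python
def deployer_actions(provider_reachable_within_24h: bool) -> list[str]:
    """Hypothetical sketch of a deployer's duties on identifying a serious incident."""
    actions = [
        "inform the provider immediately (within 24 hours)",
        "inform the importer or distributor",
        "inform the relevant market surveillance authorities",
    ]
    if not provider_reachable_within_24h:
        # Fallback: the provider's obligations apply mutatis mutandis,
        # so the deployer files the Article 73 report itself.
        actions.append("file the Article 73 report directly, as the provider would")
    return actions


print(deployer_actions(provider_reachable_within_24h=False))
```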

Authorities

MSAs must take appropriate measures within 7 days of receiving a notification; where the incident concerns fundamental rights, they must inform the national public authorities or bodies responsible for supervising or enforcing those obligations. National competent authorities must immediately notify the Commission of any serious incident, whether or not action has been taken. The AI Board may evaluate and review the incident-reporting regime (Article 66(e)).

Interplay with other EU incident-reporting regimes

When a high-risk AI system listed in Annex III of the AI Act – for example, systems used to manage critical infrastructure in energy, water, transport and digital networks, or systems used for credit and insurance decisions relating to access to essential private services – is already covered by another EU law with equivalent incident-reporting duties (e.g., CER, NIS2, DORA, MDR/IVDR), Article 73 applies only to incidents involving fundamental-rights obligations (Article 3(49)(c)). In practice, the underlying incident continues to be reported under the sector regime, and the EU AI Act is used only to notify the fundamental-rights aspect of the same event.
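
A hypothetical routing function can capture this scoping rule; the name and boolean inputs are illustrative assumptions, not the Commission’s framing.

```python
def article_73_applies(
    covered_by_equivalent_sector_regime: bool,
    concerns_fundamental_rights: bool,
) -> bool:
    """Whether an incident involving an Annex III high-risk system is
    notified under Article 73, per the draft guidance's reading.

    If an equivalent sectoral regime (e.g., CER, NIS2, DORA, MDR/IVDR)
    already covers the system, Article 73 is used only for the
    fundamental-rights aspect (Article 3(49)(c)); the underlying incident
    stays with the sector regime.
    """
    if covered_by_equivalent_sector_regime:
        return concerns_fundamental_rights
    return True  # otherwise Article 73 applies to all serious incidents


# Example: a NIS2-covered energy-grid system producing a discriminatory
# outcome is reported under NIS2 *and* notified under Article 73 for the
# fundamental-rights aspect.
print(article_73_applies(True, True))   # True
print(article_73_applies(True, False))  # False: sector regime only
```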

In addition, the draft notes that there might be situations where incident-reporting obligations overlap with Article 33 of the General Data Protection Regulation (GDPR) – where there is a “personal data breach” – and with other laws and regulations. These are separate tracks that may run in parallel with an Article 73 notification. The Commission will provide further detail on how sectoral and horizontal legislation interacts with the EU AI Act.

Lastly, the Commission’s draft reporting template includes a field titled “Information about other incident obligations.” Completing it allows an Article 73 notice to reference any parallel sectoral reports (and the legal basis used), helping authorities understand how the same event is being handled across regimes.

Tags

cyber security, data, data protection, eu ai act, eu ai act series, eu digital strategy