
Freshfields TQ



The EU’s approach to AI and liability: a broadening of legislation?


The European Data Protection Supervisor (EDPS) recently addressed the data protection implications of two legislative proposals on AI and clearly recommended that damages caused by high-risk and non-high-risk AI systems be considered equally, thereby broadening the scope of the rules on civil liability stemming from AI systems.

Indeed, if the EDPS’s recommendations are accepted by the European co-legislators and ultimately reflected in the final text of these proposals, we can expect a significant set of requirements applicable not only to high-risk AI systems, but also to non-high-risk AI systems.

 

Background: the legislative proposals for AI examined by the EDPS 

On 11 October, the EDPS published its opinion (no. 42/2023) (the Opinion) addressing the implications, from a data protection perspective, of two of the most significant legislative proposals on AI currently under consideration by the European institutions:

  • the Proposal for a Directive on liability for defective products (the PLD Proposal), revising the existing Product Liability Directive (PLD1); and
  • the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (the AILD Proposal).  

The PLD1 was adopted nearly 40 years ago to ensure an equal level of consumer protection throughout the single market, based on the concept of producers’ no-fault liability for damage caused by defective products. In revising the existing set of rules, the new PLD Proposal establishes that software must be considered a product within the scope of the directive, and that the existing notion of ‘damage’ must be extended to cover ‘the loss or corruption of data.’

The main objective of the AILD Proposal is to ensure effective access to compensation for natural persons suffering damage stemming from AI systems, while indicating that member states should review the appropriateness of their domestic civil liability systems.

These two proposals (jointly referred to as the Proposals) are part of a package of measures to support the deployment of AI in Europe and must be read in conjunction with the horizontal rules laid down in the proposed Regulation on Artificial Intelligence (the AI Act). The AI Act introduces EU-wide minimum requirements for AI systems and proposes a sliding scale of rules based on risk: the higher the perceived risk, the stricter the rules. AI systems with an ‘unacceptable level of risk’ will be strictly prohibited, and those considered ‘high-risk’ will be permitted but subject to the most stringent obligations.

 

The EDPS recommendations

The Opinion rendered by the EDPS includes some noteworthy considerations with a potentially significant impact, as they would extend the scope of the various pieces of European legislation concerning AI.

In particular, the Opinion includes some sensible recommendations to ensure consistency of the Proposals with the Union regulatory framework on AI (mainly the AI Act and the Data Governance Act), as well as with the data protection provisions of the GDPR.

In addition, the Opinion is clearly an effort by the EDPS to significantly broaden the scope of the AILD Proposal and to treat individuals affected by AI systems equally, without differentiating based on the classification of the AI system in question as high-risk or non-high-risk.

In a nutshell, the Opinion includes the following recommendations (in order of potential impact).
 

1. To extend the procedural safeguards provided in the AILD Proposal, namely the disclosure of evidence (Article 3) and the presumption of a causal link (Article 4), to all cases of damages involving an AI system, irrespective of its classification as high-risk or non-high-risk.

a. The EDPS notes that the main objective of the AILD Proposal is to ensure the same level of protection in compensation claims, irrespective of the involvement of an AI system. Given AI systems’ complexity and opacity, the AILD Proposal sets out a new mechanism for the disclosure of evidence (Article 3) and a rebuttable presumption of a causal link between a provider’s or user’s fault and an AI system’s output (Article 4).

b. The EDPS flags that, in the current draft of the AILD Proposal, these two mechanisms are intended to operate primarily in cases involving high-risk AI systems. However, the actual harm caused by non-high-risk AI systems can still be significant. Moreover, non-high-risk AI systems may be similarly complex and opaque (‘black box’), so claimants seeking compensation could face difficulties in accessing the evidence needed to identify the potential fault.

c. The EDPS therefore recommends not differentiating between individuals affected by AI systems based on their classification as high-risk or non-high-risk.

 

2. To minimise the risk that providers and users circumvent the new AI liability rules, by deleting the last two sentences of Recital 15 of the AILD Proposal, which are meant to exclude from the scope of the AILD all damages caused by a human assessment followed by a human act or omission, where the AI system only provided information or advice that was considered by the relevant human actor.

a. Indeed, the EDPS recommends that Recital 15 of the AILD Proposal be limited to clarifying that the liability rules set out in the AILD should only cover claims for damages where the damage is caused by an output, or the failure to produce an output, of an AI system through the fault of a person, for example the provider or the user under the AI Act. It remains understood that, in such cases, the claimant would have to follow the standard liability rules, without any possibility of alleviating the burden of proof by taking advantage of the procedural safeguards foreseen in Articles 3 and 4 of the AILD Proposal.

 

3. To ensure that the information disclosed by providers of high-risk AI systems pursuant to Article 3 of the AILD Proposal is accompanied by explanations in an intelligible and generally understandable form.

a. Namely, the EDPS expressly recommends introducing in the AILD Proposal a specific requirement that the information disclosed under Article 3 not be limited to technical documentation (intelligible to experts), but also include clear and comprehensible explanations.

 

4. To ensure that individuals who have suffered damages caused by AI systems produced and/or used by EU institutions, bodies and agencies are not placed in a less favourable position and enjoy protection equivalent to that provided for in the Proposals.

a. EU institutions, offices, bodies and agencies (EUIs) are not directly subject to the GDPR. Instead, the EDPS is specifically responsible under Regulation (EU) 2018/1725 for ensuring that the fundamental rights of individuals, including data protection, are respected by EUIs.

b. The EDPS notes that the current draft of the AI Act proposal expressly applies to EUIs as providers or users of AI systems, yet neither the AILD Proposal nor the PLD Proposal appears to apply in cases of damages stemming from AI systems produced and/or used by EUIs.

c. As a result, the EDPS calls upon the co-legislators and the Commission to consider measures to ensure that individuals suffering damages caused by AI systems produced and/or used by EUIs are not placed in a less favourable position.

 

5. To consider additional measures to further alleviate the burden of proof for victims of damage caused by AI systems, to ensure the effectiveness of EU and national liability rules.

 

6. To add an explicit confirmation that the AILD Proposal is without prejudice to Union data protection law as reflected in the GDPR, to ensure consistency of the AILD Proposal with the main Union regulatory framework on AI.

a. Under the current draft of the PLD Proposal, a person suffering damage may choose whether to base a claim on the revised PLD, on the relevant provisions of the GDPR (or of Directive 2016/680 regarding the processing of personal data), or on both. By contrast, the current AILD Proposal does not mention the Union rules on the protection of personal data among the rules that should not be affected by the AILD.

b. Furthermore, the EDPS underlines the importance of consistency between the roles and responsibilities of the various parties (providers, manufacturers, importers, distributors and users of AI systems) and the notions of data controller and data processor in the data protection framework.

 

7. To shorten the review period laid down in the AILD Proposal, which is currently envisaged as five years. Given the fast-paced evolution of AI, five years is a relatively long period, and it would seem sensible to shorten it to ensure a timely understanding of the effectiveness of the new system of liability rules laid down in the AILD.

 

Freshfields Hub on AI legislation and regulation

To remain informed and stay ahead of the curve with the latest insights and developments on AI regulation, please explore our hub on the EU Digital Strategy. You can find more information here: EU Digital Strategy | Freshfields Bruckhaus Deringer.

We will continue monitoring the legislative developments on AI closely, supporting our clients in navigating and complying with the regulations and directives that are already in effect. In addition, we actively collaborate with European institutions to shape the regulations with support from our EU Policy Team in Brussels. Furthermore, we will continually assess how these EU regulations compare with similar regulations being enacted or considered in other countries to help our clients formulate a comprehensive global strategy.

 


Tags

ai, eu digital strategy, eu ai liability directive, eu ai act, data protection, data