
Freshfields TQ



EU AI Act unpacked #14: AI Liability Directive: Is a new Directive needed to supplement the updated product liability regime?

In this part of our EU AI Act unpacked blog post series we explore the AI liability regimes which aim to complement the AI Act by introducing new rules specific to damages caused by AI systems. In an ever-evolving technology-based world, the EU legislator is crafting new legal frameworks and updating existing ones to address new challenges around liability stemming from the use of new technology placed on the EU market or used in the EU. Earlier this year, the new Product Liability Directive (PLD) was adopted, now expressly covering software, including AI systems. This has somewhat outpaced the developments around the draft Artificial Intelligence Liability Directive (AILD), which was presented by the European Commission in 2022 and on which negotiations were suspended until the adoption of the closely linked AI Act.

Now, with the recent adoption of the AI Act, the focus shifts back to the liability regime around the use of AI. In that context, looking at the updated product liability regime with its extended scope covering AI, one might ask whether the introduction of the planned AILD is still necessary. This blog post will shed light on the distinct approaches pursued by the PLD and the AILD and on potential implications for businesses involved in the development or the deployment of AI systems.

PLD and AILD - A complementary framework

When the European Commission published the draft AILD, the new PLD was already on the horizon. And yet, with the idea of creating a "complete package" for handling liability claims related to AI, both initiatives have been driven forward, suggesting that each regime has its own scope of application, addressing different aspects of liability claims related to damages caused by AI systems.

In fact, while both regimes address situations in which damages were caused by an AI system or AI-supported products, there are substantial differences in terms of the liability concepts on which the initiatives are built, the types of damages covered, the circumstances that trigger the relevant liability regime, the types of claimants entitled to bring a claim, and the types of businesses across the value chain of AI systems subject to potential claims.

1. Conceptual approach: The PLD and the AILD are based on entirely different concepts, with the PLD establishing a no-fault (strict) product liability regime, making manufacturers of defective products strictly liable, irrespective of whether the relevant harm was caused by their fault. From a procedural perspective, the PLD makes it easier for claimants to claim damages, as it requires the defendant, at the request of the injured person, to disclose relevant evidence. If the defendant fails to disclose relevant evidence, the defectiveness of the product shall be presumed. Similarly, the PLD stipulates that the causal link between the defectiveness of the product and the damage shall be presumed where it has been established that the product is defective and that the damage caused is of a kind typically consistent with the defect in question. It is therefore on the defendant to rebut the presumptions stipulated by the PLD.

The AILD, on the other hand, proposes a targeted reform of national fault-based liability regimes by introducing modifications to the burden of proof. Whereas the claimant under the current regime must prove the fault, the damage, and the causal link between the two, the AILD allows claimants to ask a court to order the disclosure of information about high-risk AI systems. Where the defendant fails to comply with that order, it shall be presumed that the defendant was non-compliant with the relevant duty of care. The proposed AILD further introduces a rebuttable presumption of a causal link between the fault of the defendant and the output produced by the AI system (or the failure to produce an output). The claimant will still have to demonstrate, though, that the output produced by the AI system (or the failure to produce an output) gave rise to the damage.

2. Types of damages covered: The two regimes also differ in terms of the damages which claimants are entitled to claim. The PLD allows claimants to bring a claim for damages against the manufacturer if the defective product has caused death, personal injury (including medically recognised psychological harm), damage to property (except for the defective product itself, a product damaged by a defective component integrated into that product, or property used exclusively for professional purposes) or data loss.

No such limitations are included in the proposed AILD, which does not limit liability to specific types of harm. Rather, under the proposed regime, claimants can potentially claim damages for any type of damage, to the extent it is covered by applicable national law. This may include harm resulting from discrimination or breach of fundamental rights like privacy. 

3. Circumstances that trigger the relevant liability regime: The PLD focuses on defective products, which are defined as products that do not provide the safety that a person is entitled to expect or that is required under Union or national law. This includes safety requirements laid down in Union law, such as the security requirements specified in the AI Act.

By contrast, the AILD refers to non-compliance with a relevant duty of care, without requiring that the breach of a duty of care results in a “defective product”. In that respect, the AILD refers to specific provider/deployer obligations laid down in the AI Act, eg instances in which a high-risk AI system was not developed on the basis of training, validation and testing data sets that meet the quality requirements referred to in the AI Act, or where the deployer of a high-risk AI system failed to comply with its obligations to use or monitor the AI system in accordance with the instructions for use.

It is worth noting that the AILD also applies to AI systems that do not qualify as high-risk AI systems and which therefore may not be subject to specific obligations under the AI Act. In this respect, the presumption of a causal link between the fault of the defendant and the output produced by the AI system (or the failure to produce an output) shall apply where the court considers it ‘excessively difficult’ for the claimant to prove that causal link.

4. Types of claimants entitled to bring a claim: While under the PLD only natural persons are entitled to bring claims, the proposed AILD captures both natural and legal persons. References to ‘potential claimants’ in the AILD make clear that the rights under the proposed directive shall apply not only in the context of actual proceedings, but also in the pre-litigation phase, where the person concerned is considering but has not yet brought a claim for damages.

5. Types of businesses across the value chain subject to potential claims: As regards the types of businesses along the value chain of an AI system that are impacted by the two liability regimes, the PLD only applies to manufacturers, ie manufacturers of defective products and manufacturers of defective components which caused a product to be defective. Where the manufacturer is established outside the EU, and without prejudice to the liability of that manufacturer, liability may also attach to the importer of the defective product or component, the authorised representative of the manufacturer or, where applicable, the fulfilment service provider.

Under the proposed AILD, the EU Commission clarified in its FAQ that the proposed regime shall cover the liability of any person, which can be the provider, developer or the deployer of the AI system. 

Outlook and key takeaways:

  • Even though in practice the strict liability approach under the PLD might be more attractive for claimants, there may be instances in which claimants wish to rely on the AILD regime instead, eg where non-compliance with a duty of care does not result in a defective product, as required under the PLD, or where the claimant wishes to bring a claim against a user of an AI system rather than against the manufacturer of the relevant AI product.
  • The AILD is expected to receive backing from consumer protection groups and other advocacy organisations. One key reason is that those organisations are often the drivers for (collective) actions by consumers and other groups considered vulnerable vis-à-vis large companies. If enacted, the proposed AILD would fall within the scope of the EU Representative Action Directive, allowing for collective redress in cases of AI-related damage. This aligns with the broader EU strategy of enhancing consumer rights and access to justice, making the directive a favourable instrument for these groups.

    In our next blog post, we will take a closer look at when and how the AI Act will continue to evolve over the coming months and years via level II legislation and guidance from the Commission.

Tags

eu ai act series, eu ai act, ai