The AI Act and beyond – potential pitfalls for contracting on AI-powered products and services
[You can find all episodes of our EU AI Act unpacked blog series by clicking here.]
Given that the EU AI Act will soon become applicable, organisations should proactively consider its implications when assessing new AI-related business models and drafting or entering into corresponding contracts. In this blog post, we look at the current legal framework on liability for AI systems and assess the most critical aspects to address when drafting contracts for such systems.
Statutory provisions for AI liability
Questions of liability in relation to the development and use of AI have attracted significant attention. While the EU AI Act itself does not directly address liability, the recently adopted Product Liability Directive (PLD) and the draft AI Liability Directive aim to establish a harmonised regime for consumer liability claims for damage caused by AI-based products and services.
The PLD, for example, introduces significant claimant-friendly changes. These include (i) an extended definition of the term ‘product’ that now covers software and AI, (ii) rebuttable presumptions as to defectiveness and causation, and (iii) the introduction of disclosure requirements (see here for more details). However, these statutory provisions in the PLD, as well as the planned rules on liability in the AI Liability Directive, focus solely on non-contractual civil liability towards natural persons. They explicitly do not cover business-to-business liability and do not affect contractual claims.
Contractual allocation of responsibilities and liability
In the absence of AI-tailored statutory rules on contractual liability, businesses must establish customised liability frameworks in contracts concerning AI systems, taking into account the existing rules on general civil liability. For providers of AI systems subject to the EU AI Act in particular, such customised frameworks in AI-related contracts are not just an opportunity but a necessity.
Most obligations under the EU AI Act relating to AI systems are addressed to providers (see, for instance, our blogs #5, #6 and #7). These requirements concern, inter alia, the technical design, development, and training of AI systems. Nevertheless, deployers, as the closest link to the natural persons subject to an AI system, are accountable for compliance regarding its use, even if they did not develop the system themselves. For certain obligations, the deployer requires assistance from the provider, eg for retaining logs, which may not be feasible for deployers who lack control over the system's design.
While the AI Act imposes fixed regulatory responsibilities, internal agreements between parties along the AI value chain can redistribute liability in relation to these responsibilities. For example, deployers could seek contractual indemnities from providers to account for non-compliance stemming from design flaws. That said, such agreements may not and cannot override the provider’s statutory obligations under the AI Act.
Many situations, however, are not as clear-cut and require careful examination of the specific tasks each party undertakes in the AI value chain. For instance, the responsibility for training an AI system could fall on either the provider or the deployer of the respective AI system, depending on their contractual agreement. Therefore, contractual responsibility for compliance with the data and data governance requirements laid down in the AI Act may vary.
Given how easily a party can qualify as a provider under the AI Act, implementing a well-defined liability regime tailored to the specifics of each case is essential for all parties involved.
Additional risk considerations in AI-related contracts
While the allocation of responsibilities and the liability regime may pose some of the biggest risks for parties along the AI value chain, there are numerous other areas that can and should be addressed in future contracting. Here are some examples of what to keep in mind and what to expect next when negotiating contracts for AI systems:
- Use restrictions: Providers and manufacturers can mitigate liability risks by contractually specifying permissible use scenarios of an AI system and issuing detailed usage instructions.
- Responsibility for IP infringements: The output of an AI system may infringe third-party intellectual property rights, eg if the output closely resembles material that is protected under copyright law. There are many scenarios in which this can happen: the AI may have been trained on copyrighted data, the protected material may have been scraped from the internet, or the user may have triggered infringing output through selective prompting strategies. Contracts should include provisions defining responsibility for infringing outputs and indemnification terms to protect against IP claims.
- Rights to AI outputs: Whether and how the output of an AI system is protected by copyright law is currently unclear. Contracts should clarify ownership of AI outputs and outline usage rights for deployers as well as potential end users. Providers might also consider restricting the use of the output (eg limiting commercial applications) or requiring additional fees for extended use rights, depending on the business model.
Key takeaways
Drafting contracts for the provision and use of AI systems is a complex and evolving challenge. While the EU AI Act introduces new obligations for providers and deployers of AI systems, the lack of statutory contractual rules means that off-the-shelf templates are unlikely to address the nuances and risks specific to each case. Tailored contractual solutions along the lines of the key points we’ve outlined in this blog post can significantly mitigate risks and ensure a clear allocation of responsibilities. As a starting point for your drafting exercise, the draft EU model contractual AI clauses provided by the AI procurement community – a peer-based review group supported by the European Commission – offer contractual building blocks that can help you operationalise the abstract obligations of the EU AI Act in your contracts.