Introduction
Generative artificial intelligence (‘AI’) tools are algorithms that can be used to generate new content, including text, images and software code. This emerging technology has recently taken the world by storm.
As well as posing interesting ethical questions, the proliferation of AI tools presents some novel challenges for lawyers advising companies that are developing or using these tools. This blog post outlines some of the legal issues that may arise, focusing on UK and European laws.
Legal considerations
Regulation of AI
The applicable regulatory rules will depend on the type and intended usage of the AI system, as well as the jurisdiction of the parties involved. We have previously blogged about the proposed regime to regulate AI in Europe under the EU’s AI Act, which is expected to come into force in late 2023. Some have speculated that the AI Act may become a global trendsetter in a similar way to the EU’s regulation of personal data. The European Parliament co-rapporteurs recently proposed adding a new residual category to the list of high-risk AI applications in the AI Act to cover generative AI systems, on the basis that AI-generated text could be mistaken for human-made content.
By contrast, the UK has no laws specifically drafted to regulate the use of AI; instead, AI is partially regulated through a patchwork of legal and regulatory requirements built for other purposes. Our previous blog post discusses the UK government’s proposed approach to regulating AI. AI-specific regulation, and related areas of law such as intellectual property (‘IP’) and data protection/privacy, vary greatly between jurisdictions.
Intellectual property
Early discussion around AI focused on IP ownership rights in the content generated by AI (ie, the ‘output’); see, for example, our previous blog post. More recently, IP issues have arisen where copyright materials have been used to train an AI model (ie, the ‘input’). There is a particular risk of copyright infringement where the AI reproduces copyright material that is recognisable in the output, and there is even potential for indirect infringement claims where an input is unlawfully used to create an output that does not, of itself, contain a copy of the input. AI developers scraping data should therefore check the applicable permissions and, where necessary, obtain a licence for, or exclude, copyright material. Otherwise, they risk claims for damages and injunctive relief from copyright owners.
Another challenge may arise where generative AI is used to create snippets of code as part of software development. Generative AI does not (currently) give any credit to the original authors of the code or indicate its source, which may create issues to the extent it reproduces open-source software (‘OSS’). Given that some OSS licences require credit to be given to the original author, and others mandate that all derivative works must also be open source or restricted to non-profit use, companies need to be careful about incorporating OSS into their commercial software products. It would require considerable effort for companies to reverse engineer their software to determine which parts of their codebase are covered by an OSS licence, and the risk that fragments of OSS-licensed code could prevent the whole product from being commercialised is a key commercial consideration for businesses.
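As a starting point, companies can audit the licences already declared in their codebase. The sketch below is purely illustrative (the file extensions, root path and ‘copyleft’ list are assumptions, not legal advice): it flags files carrying SPDX licence tags that commonly trigger review.

```python
# Illustrative sketch only: the file extensions, root path and 'copyleft'
# list below are assumptions, not a definitive compliance tool.
import os
import re

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

# Licences often flagged for legal review because of copyleft obligations
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only", "LGPL-3.0-only"}

def scan_codebase(root: str) -> dict[str, list[str]]:
    """Map each declared SPDX licence identifier to the files declaring it."""
    found: dict[str, list[str]] = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".py", ".c", ".js", ".java")):
                continue
            path = os.path.join(dirpath, name)
            try:
                # Licence tags conventionally sit near the top of a file
                with open(path, encoding="utf-8", errors="ignore") as f:
                    header = f.read(2048)
            except OSError:
                continue
            for licence in SPDX_RE.findall(header):
                found.setdefault(licence, []).append(path)
    return found

if __name__ == "__main__":
    for licence, files in scan_codebase("src").items():
        flag = "REVIEW" if licence in COPYLEFT else "ok"
        print(f"[{flag}] {licence}: {len(files)} file(s)")
```

Note the limitation: such a scan only catches licences that are explicitly declared. AI-generated snippets of uncredited OSS carry no tags, which is precisely the problem described above; identifying them typically requires specialist code-provenance tooling.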
Liability
In some cases, the use of generative AI may cause damage or harm to third parties. For example, an AI system used in the healthcare industry might produce inaccurate diagnoses or treatment recommendations, leading to harm to a patient; a chatbot used in financial services might produce fraudulent or negligent investment advice, causing financial loss to a customer. As well as highlighting the need for proper safeguards and oversight to prevent such incidents, this raises questions about who is responsible when things go wrong: both where users mishandle the technology and where systems have faulty or flawed designs that produce errors.
Users and developers of AI could face various liabilities, including negligence claims under tort law and product liability claims in the case of AI systems bought ‘off the shelf’. In September 2022, the EU Commission published proposed amendments to the Product Liability Directive, which would bring AI within a strict liability regime, and a new AI Liability Directive providing common rules for fault-based liability. Organisations using AI should ensure they understand the liabilities that may arise under current legal frameworks, for example the liability that may fall on an employer that uses an AI tool in recruitment later shown to discriminate unlawfully. The parties will then need to consider the allocation and mitigation of these risks when drafting their contractual arrangements (eg, appropriate disclaimers, limitations of liability, warranties and indemnities).
Data privacy
AI systems are trained on large volumes of data, which may include information relating to natural persons that is subject to data protection law (personal data). Generative AI tools will also often rely on the processing of personal data (eg, information on users) as part of their operation. As such, ample consideration must be given to ensure that any personal data is used and protected in accordance with applicable data protection and privacy laws.
It is well-known that large fines can be imposed for breaches of EU or UK data protection laws, but regulators can also entirely prohibit non-compliant processing of personal data, which can effectively ‘ban’ entire services. For example, in February 2023 an AI-based chatbot was prohibited by the Italian data protection authority from further processing of personal data of Italian users after the authority found that the service put children and vulnerable people at risk, did not comply with requirements to provide users with certain information, and lacked a valid legal basis for its processing of personal data.
Various trade-offs may arise in the development of generative AI, such as between accuracy and privacy, accuracy and fairness (see the risks of discrimination below), and explainability (the ability to explain the AI model and its output) and accuracy. Generally, the more data an AI system is trained on, the more statistically accurate it will be. However, the interest in training a sufficiently accurate AI system must be balanced against the data minimisation principle in EU and UK data protection law, which requires organisations to process only the minimum amount of personal data needed to fulfil the business purpose; this is a fine line to tread.
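The statistical pull against minimisation can be made concrete with a toy simulation (all figures below are synthetic and purely illustrative): an estimate of an underlying rate becomes more accurate as the sample grows, which is the dynamic that tempts developers to retain more data than the purpose strictly requires.

```python
# Synthetic illustration of why more data improves accuracy: estimating an
# underlying rate from samples of increasing size. All figures are invented.
import random

random.seed(1)
TRUE_RATE = 0.3  # the 'ground truth' the model is trying to learn

for n in (100, 1_000, 10_000, 100_000):
    hits = sum(random.random() < TRUE_RATE for _ in range(n))
    estimate = hits / n
    print(f"n={n:>7,}: estimate={estimate:.3f}, error={abs(estimate - TRUE_RATE):.3f}")
# Error shrinks as n grows; the legal question is how much personal data is
# genuinely necessary for the stated purpose, not how much is statistically useful.
```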
Decision-making based solely on automated processing is prohibited in many cases under UK data protection laws (with limited exceptions). Data subjects must be given certain information about any automated decision-making, including meaningful information about the logic involved.
Many other data protection principles and requirements will be pertinent when considering the development or deployment of AI where personal data is used. Accordingly, strong governance arrangements are vital to ensure a proper process is in place for making difficult decisions.
Ethics and discrimination
A further potential issue for AI systems relates to concerns that some AI solutions may operate in ways that are unethical. Ethical concerns may arise, for example, from biases demonstrated by AI models, or from the effects AI may have on users or wider society. Ethical considerations may also arise in relation to the data used or generated by those tools (data ethics).
Unethical outcomes may trigger liabilities under laws (for example, anti-discrimination legislation) or have negative repercussions for relationships with customers or other key stakeholders.
The risk that AI may reflect, and even compound, existing biases and stereotypes in society is commonly discussed, and those developing or using AI systems will need to consider it. First, patterns of systemic discrimination may be reflected in the training data: a key variable may, for example, be based on historical data that reflects biases arising from historically unfair practices. Secondly, bias may be present in the algorithm itself if it engages in proxy discrimination as a result of its reliance on correlation. Users and developers of AI therefore need to be careful that the output of AI systems does not demonstrate biases that risk breaching anti-discrimination laws or, to the extent personal data is processed, data protection laws.
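The proxy effect can be illustrated with a toy simulation (the groups, postcodes, correlations and approval rates below are all invented for the example): a model that never sees the protected attribute, but relies on a postcode correlated with it, still produces sharply different outcomes between groups.

```python
# Toy simulation of proxy discrimination: the model never sees the protected
# attribute, yet a correlated postcode reproduces the disparity. All groups,
# postcodes, correlations and approval rates are invented for illustration.
import random

random.seed(0)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # protected attribute (withheld from the model)
    # Assumed correlation: each group mostly lives in a different postcode area
    if group == "A":
        postcode = 1 if random.random() < 0.9 else 2
    else:
        postcode = 2 if random.random() < 0.9 else 1
    applicants.append((group, postcode))

def model_decision(postcode: int) -> bool:
    """A 'blind' scoring rule based only on postcode; the differing approval
    rates stand in for patterns learned from historically biased data."""
    return random.random() < {1: 0.8, 2: 0.4}[postcode]

approved = {"A": 0, "B": 0}
totals = {"A": 0, "B": 0}
for group, postcode in applicants:
    totals[group] += 1
    approved[group] += model_decision(postcode)

for g in ("A", "B"):
    print(f"Group {g}: {approved[g] / totals[g]:.0%} approved")
# A large gap appears between the groups despite the protected attribute
# never entering the model: correlation alone carries the bias through.
```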
Practical steps to consider
Organisations developing or working with AI should be aware of the potential legal implications and stay informed of the latest developments in the laws and regulations governing AI systems, as well as industry best practices, in order to mitigate risk and ensure compliance.
Companies using generative AI should consider implementing the following steps to mitigate some of the legal risks canvassed above:
- implement strong governance, including training and clear policies for AI development, deployment and usage;
- review the risk of IP infringement arising from the use of generative AI, including by reviewing the terms of use of the AI system and the sources of training data;
- ensure the AI system is thoroughly tested and validated prior to launch;
- monitor the AI’s performance and errors, including any potential biases, with a plan in place to address any issues promptly;
- implement privacy measures compliant with data protection law, including appropriate technical and organisational measures to ensure the security of the processing;
- clearly document the AI system’s design, operation, and limitations; and
- obtain adequate insurance coverage where possible.