Exponential improvements in artificial intelligence (AI) and other technologies have recently led to an explosion of interest and investment in AI by businesses across the world. That has prompted a wide range of often urgent AI-related questions for in-house counsel.
This blog provides a short summary of the top actions that the legal counsel of an organisation contemplating using AI tools should take.
1. Understand the types of legal and reputational risks
Understand the main legal and reputational risks that may arise from the use of AI tools, including:
- intellectual property, including whether you: (1) are entitled to use your existing content to train the AI system; (2) own any improvements you make to the AI system; and (3) have rights to the AI system’s outputs;
- liability for the AI system’s outputs;
- data protection, including managing the use of personal data as an AI input;
- compliance with other applicable laws and regulations (eg, antitrust, consumer protection, bank secrecy regulation and export control rules);
- ethical considerations, which often overlap with privacy and consumer protection rules; and
- for employee use cases, employment law risks such as those associated with employee monitoring and employee data.
For further information, see Generative AI: Five things for lawyers to consider.
We’ll also be diving deeply into these topics in future blogs linked on this page.
2. Track developing AI-related laws and how they might impact your plans
Be aware that regulation of AI, data and technology is a fast-developing area globally.
For example, an increasing number of jurisdictions (including recently India and various US states) have enacted or announced privacy laws that are similar to the EU’s GDPR or which impose other challenging requirements that may be relevant to the development or deployment of AI.
Some jurisdictions, such as China, have introduced laws that specifically target AI, while other jurisdictions, such as Canada, Thailand, Brazil and the EU, are in the process of developing such laws. In addition, other jurisdictions, such as the UK, have taken steps to adapt existing regulatory structures and approaches to address AI.
Organisations deploying AI need to keep abreast of future legislation and build anticipated requirements into their plans. For example, organisations with an EU-nexus that are implementing AI systems need to consider what aspects of the EU’s AI Act are likely to apply to them.
3. Implement strong AI governance
Strong governance is crucial, including in relation to:
- establishing appropriate training and clear policies for the development, deployment and use of the AI system;
- ensuring that the AI system is thoroughly tested and validated prior to launch;
- documenting the AI system’s design, operation, and limitations;
- identifying suitable use cases and triaging for higher risks that merit a deeper legal review;
- issuing regular and comprehensible disclosures to users about the AI system’s limitations or known issues;
- monitoring the AI system’s performance, errors, and any potential biases on an ongoing basis, with a plan in place to address any issues promptly;
- implementing privacy measures compliant with data protection law; and
- adapting other policies and processes, such as seeking adequate insurance coverage where possible.
For more information on risks and mitigants in the context of generative AI, see our dedicated post.
4. Tailor the approach to your sector and organisation
Ensure your approach is tailored to the laws and regulatory frameworks applicable to your sector and individual organisation, as well as to your organisation’s culture and values.
For further information, see our blog posts:
- The European Medicines Agency’s proposed risk-based approach to AI in pharma lifecycle
- The UK Financial Conduct Authority’s evolving regulatory approach to AI
Look out for our forthcoming AI blogs
We are working closely with many businesses across various sectors as they develop and implement AI systems at all levels of the AI value chain.
In the coming months we will be publishing a series of further blog posts, drawing on our experience to expand on many of the points highlighted in this article.
Other Freshfields resources that you may be interested in
We have already published articles on various AI topics in addition to those referenced above. A selection of those includes:
- The history of AI
- G7 hope to kick off global alignment on AI governance
- The AI Safety Summit – what are its objectives and what steps are countries taking to regulate AI?
- The White House’s “Blueprint for an AI Bill of Rights”: The Biden Administration’s vision for AI
- The UK’s proposed approach to AI regulation
- The UK plans to liberalise automated decision-making
- The EU’s approach to AI and liability: a broadening of legislation?
- Opportunities, boundaries and uncertainties of text and data mining in AI
- The US fair use defense for generative AI tools after Warhol v Goldsmith
- Who owns the rights to AI-generated content under US laws
In addition, see our EU Digital Strategy Hub for further resources on the EU’s approach to regulating AI.