The UK government has reaffirmed its commitment to a pro-innovation, proportionate approach to regulating AI in its recently published response to feedback on the 2023 AI White Paper. However, it is unclear how much regulatory progress will be made before a looming UK general election that may bring a new governing party and a change of direction.
Background
In March 2023, the UK government published a White Paper outlining its “pro-innovation” approach to the regulation of AI (see our previous blog post). The White Paper dismissed the idea of a wide-ranging, EU-style AI Act, mapped out a framework for existing regulators to adopt a sector-specific approach to AI governance, and underscored the importance of a proportionate approach focused on enabling the development of safe AI.
Following its publication, the UK government launched a consultation on the White Paper. The process attracted 409 written responses and was also informed by roundtables, technical workshops, bilateral meetings, and a programme of ongoing regulator engagement.
On 6 February 2024, the UK government published its response to the consultation process.
The UK’s approach to regulating AI
The UK government’s response to the consultation emphasised general support for the principles-based, pro-innovation approach outlined in the White Paper while also recognising the need for more detail and greater regulatory coordination and support. In particular:
- Five key principles: Respondents generally agreed that the UK government’s cross-sectoral principles would address the key risks posed by AI technologies. Those five principles are: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. In its response, the UK government pushed back on calls for additional principles and affirmed its intention to implement the proposed regime on a non-statutory basis in the first instance, before considering whether legislation is needed.
- Existing regulators to implement principles: There was strong support for the UK government’s plan to rely on existing regulators to apply the principles within their own regulatory frameworks. However, the UK government acknowledged that it may need to fill gaps in existing powers and remits. In response to criticism that there was too little detail on how existing regulators would implement the principles, the UK government published voluntary initial guidance to support regulators in applying the five AI principles.
- Central support: Respondents generally supported a greater degree of central coordination and technical support than originally contemplated. In response, the UK government confirmed it is developing a central function to support, among other things, effective risk monitoring, regulator coordination, and knowledge exchange, including a new steering committee with key regulators.
- New funding: Acknowledging feedback about a current lack of technical capability amongst UK regulators, the UK government has committed £10 million to jumpstart the AI capabilities of regulators. The UK government also announced a £90 million commitment to UK research, including a partnership with the US on responsible AI.
- Future regulation: Recognising concerns about whether existing laws can adequately manage the risks arising from the rapid development of general-purpose AI, the UK government has left the door open to introducing new regulation in the future, if necessary. It indicated that it expects the UK will ultimately need to adopt binding requirements for the most advanced general-purpose AI systems, but that it will not introduce such legislation at this stage.
Other proposals
The consultation response contains a roadmap of actions to expect from the UK government in 2024. Among other things, those include:
- Intellectual property: The UK’s Intellectual Property Office had been due to publish a voluntary code on the interaction between copyright and AI. However, a working group, formed of representatives from the creative industries and leading technology companies, has proven unable to reach an agreement. The response notes that the UK government will now work to resolve the issue with the aim of ensuring ‘AI development supports, rather than undermines, human creativity, innovation, and the provision of trustworthy information’ and that future work would explore ‘mechanisms for providing greater transparency so that rights holders can better understand whether content they produce is used as an input into AI models.’
- Support for industry and employees: In Spring 2024, the UK government will publish a range of guidance for organisations and individuals, including on the use of AI in recruitment, the value of AI assurance in helping organisations build safe and trustworthy systems, and an AI skills framework to help employers, employees and training providers identify AI upskilling opportunities.
- International collaboration: Following the success of the first AI Safety Summit, the UK government plans to publish the first iteration of the International Report on the Science of AI Safety in Spring 2024. The government will continue its international partnerships on AI, including supporting the Republic of Korea and France on the next AI Safety Summits.
On 12 February 2024, the Department for Science, Innovation and Technology also published an introductory guide for practitioners about assurance techniques to support the development of responsible AI.
Next steps
At least in the short term, it appears clear that the UK will not be introducing a general ‘AI law’. Greater clarity on how existing laws apply to AI should emerge as regulators begin to issue practical guidance, tools, and resources implementing the principles in their sectors. The UK government has written to several regulators (including the ICO, CMA, Ofcom and FCA) asking them to publish an update outlining their strategic approach to AI by 30 April 2024.
The UK’s sectoral approach to AI regulation means that organisations need to be attentive to regulatory developments in their existing areas of regulatory exposure. As referenced in the consultation response, regulators (including the ICO, CMA and Ofcom) have already taken proactive steps to address AI and implement the principles within their remits.
Although the UK government appears set on its strategic direction, a looming general election, expected to be called by early 2025, raises the distinct possibility that a future government may revisit key issues.