UK explains how it will regulate AI

The UK government has published a white paper outlining its approach to the regulation of artificial intelligence (AI). The white paper acknowledges that AI is unlocking huge opportunities but, in some cases, has the potential to create new risks or accelerate existing ones.

The publication of the white paper is particularly timely given that generative AI tools have taken the world by storm recently and driven a wave of demand. See our blog post Generative AI: Five things for lawyers to consider.  

The white paper takes a broad view of what AI is, with the intention that its proposals will be ‘future-proof’ by covering all AI technologies that are both ‘adaptable’ and ‘autonomous’. The white paper is therefore relevant both to the newer AI technologies that have been hitting the headlines and to many AI solutions that are already widely used (eg customer service chatbots).

In summary, the white paper reflects the UK government’s desire to establish a nimble, light-touch governance regime for AI. Key aspects of the proposals include:

  • The establishment of a regulatory framework comprised of five overarching principles for all relevant existing UK regulators to apply in relation to AI.
  • A limited role for government in centralised coordination & monitoring of the new regulatory framework.
  • No further new generally applicable AI laws.
  • Related initiatives, including proposals designed to promote tools for trustworthy AI, introduce regulatory sandboxes, address capability gaps within regulators and work with international partners.

The UK’s approach of relying largely on existing laws and regulations significantly diverges from the approach being taken by the EU, which is planning to introduce new generally applicable AI laws.

The remainder of this article explains these aspects in more detail, together with the background and next steps, including the public consultation the government has launched on the white paper.

Background

The UK is a leader in the European and global AI landscape. The UK has a thriving AI ecosystem and a ten-year National AI Strategy that aims to maintain the UK’s position as a ‘global AI superpower’. Britain is home to twice as many companies providing AI products and services as any other European country.

In July 2022, the UK’s Department for Digital, Culture, Media and Sport (DCMS) published an AI policy paper outlining the UK government’s plans for regulating AI. Feedback was requested ahead of a more detailed white paper on AI regulation that the government initially hoped to publish later in 2022. For our analysis of that 2022 policy paper see here.

There have been significant political changes since the 2022 policy paper, including a new Prime Minister. Responsibility for AI regulation has moved from DCMS to the recently established Department for Science, Innovation and Technology (DSIT). The UK also now has the output of Sir Patrick Vallance’s Regulation for Innovation report, which is heavily referenced in the AI white paper. Although political turnover delayed the AI white paper from 2022 to 2023, the key aspects of the UK’s approach to AI regulation remain largely consistent with the direction outlined (two Prime Ministers back) in 2022.

What are the key elements of the government’s plans? 

Pro-innovation, proportionate approach

The white paper explains that, despite their benefits, AI systems can create or exacerbate a wide variety of risks and public concerns. These include, for example, the risks of relying on AI in the provision of critical infrastructure, the risk that AI trained on biased data may embed bias or discrimination into systems and processes, risks to mental or physical safety, and risks to human rights and privacy.

The white paper reflects the UK’s desire to establish a nimble, light-touch regulatory framework to manage such risks. The framework has three main objectives: (1) to drive growth and prosperity in the UK by encouraging new innovations; (2) to increase public trust in AI by effectively addressing AI risks; and (3) to strengthen the UK’s position as a global leader in AI.

To achieve these objectives, the regulatory regime proposed in the white paper seeks to adopt an approach that is ‘pro-innovation’, ‘proportionate’, ‘trustworthy’, ‘adaptable’, ‘clear’ and ‘collaborative’.

Delegating responsibility to existing regulators

In its press release, the UK government states it ‘will avoid heavy-handed legislation which could stifle innovation and [instead] take an adaptable approach to regulating AI.’ It proposes to do this by relying on existing regulators and regulatory structures (eg, those applicable to financial services or data protection) rather than establishing broadly applicable AI-specific regulations or a dedicated AI regulator. Forthcoming regulation in some sectors, such as medical devices, is still expected to include specific provisions on the use of AI.

This approach is also seen by the government as pro-competitive. By minimising regulatory burdens, the government is seeking to ensure that smaller businesses with fewer resources are not disproportionately affected by the regulatory framework.

Five key principles

The government proposes five overarching principles for all UK regulators to apply in relation to AI to ensure common challenges are approached in a coherent, streamlined and context-specific way.

  • Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed.
  • Transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and to explain a system’s decision-making process at a level of detail appropriate to the risks posed by the use of AI.
  • Fairness: including that AI is used in compliance with the UK’s existing laws, for example equalities and data protection laws, and does not discriminate against individuals or create unfair or anticompetitive commercial outcomes.
  • Accountability and governance: to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes.
  • Contestability and redress: providing people with clear routes to dispute harmful outcomes or decisions generated by AI.

Regulators will be expected to interpret, prioritise and implement those principles within their sectors and domains proportionately to address the risks posed by AI, in accordance with existing laws and regulatory remits. The government’s intent appears to be that regulators should consider lighter-touch options (eg guidance or voluntary measures) and focus on high-risk uses of AI in the first instance.

The government will not put these principles on a statutory footing initially. However, following an initial period of implementation, the government does anticipate introducing a statutory duty on regulators to have due regard to the principles, unless experience shows that there is no need to legislate.

Centralised coordination & monitoring

The 2022 policy paper suggested the need for a small coordination function within the regulatory architecture. Following feedback, the government is now proposing to make that more extensive with the following central support functions to be provided from within government:

  • Monitoring and evaluation of the overall regulatory framework’s effectiveness and the implementation of the principles.
  • Assessing and monitoring risks across the economy arising from AI.
  • Conducting horizon scanning and gap analysis to inform a coherent response to emerging AI technology trends.
  • Supporting testbeds and sandbox initiatives.
  • Providing education and awareness.
  • Promoting interoperability with international regulatory frameworks.
  • Providing guidance to regulators on how to implement the principles.

Assuming the UK’s approach remains unchanged, businesses can expect to see further regulatory guidance interpreting and implementing the cross-sectoral principles from a host of regulators over the next year, including the Information Commissioner’s Office (ICO), the Office of Communications (Ofcom), the Competition and Markets Authority (CMA), the Medicines and Healthcare products Regulatory Agency (MHRA), the Equality and Human Rights Commission and the Financial Conduct Authority (FCA).

The white paper acknowledges that some of these regulators have already begun to grapple with how the use of AI should be regulated in their sectors, particularly where forthcoming updates to regulatory regimes have created an opportunity to clarify how the rules will apply to AI. For example, the MHRA has already published a roadmap setting out guidance on the requirements for AI used in medical devices, as part of the wider reforms being discussed for the medical devices regulatory regime.

The Bank of England, Prudential Regulation Authority (PRA) and FCA recently published a discussion paper addressing artificial intelligence and machine learning in financial services. The ICO has also published a significant amount of guidance on AI. Updated data protection guidance is likely to be needed once the recently announced data protection reforms, which are designed to allow organisations to use automated decision-making in additional scenarios, are implemented. For further information on those planned reforms, see our blog post: UK announces data law reforms – third time’s the charm?

The UK government believes that its approach is more flexible and future-proof, and less likely to unduly stifle innovation, than a single framework with a fixed, central list of risks and mitigations. This creates an interesting contrast with the approach taken by the EU.

Divergence from the EU

The EU is proposing to introduce new legislation covering AI generally, which may be finalised in 2023. The EU’s AI Act will categorise various AI applications based on their use case and perceived risk, and will also mandate additional sector-agnostic obligations.

Under the EU’s AI Act, specific applications of AI technology (eg, social scoring, critical infrastructure, grading in educational settings or medical devices and many others) would be either subject to detailed specific obligations (eg, regarding security, reliability, information provision, human oversight and certification) or prohibited. For further information on the AI Act, see here.

There is an ongoing debate within the EU on whether its AI Act should be enforced by existing national data protection authorities or by a new centralised body.

Other aspects of the proposals

Other more detailed aspects of the UK government’s proposals include:

  • Building on existing regimes for cooperation between regulators and intervening ‘in a proportionate way’ to address regulatory uncertainty and gaps. For example, the existing Digital Regulation Cooperation Forum provides a framework in which the ICO, FCA, CMA and Ofcom are already collaborating on aspects of AI. However, the white paper provides little detail on this crucial element of the government’s plans, and the extent to which regulators will apply the key principles listed above consistently across sectors remains an open question.

  • Promoting tools for trustworthy AI (such as assurance techniques, voluntary guidance and technical standards).
  • Regulatory sandboxes and testbeds to help businesses test how AI rules will apply to their products and services before going to market.
  • Exploring options for addressing capability gaps within individual regulators and across the wider regulatory landscape.
  • Launching a portfolio of AI assurance techniques in Spring 2023.
  • Working closely with international partners to both learn from, and influence, regulatory and non-regulatory developments.

Liability for the use of AI

The UK government is not proposing to intervene and change AI accountability and liability regimes at this stage. Once again, this contrasts with the EU, which has proposed new legislation to address liability for harms that may arise from the use of AI. The government has asked for views on the adequacy of existing routes to redress for harms caused by AI in its consultation on the white paper.

Throughout the white paper, there are acknowledgements of certain areas where the lack of clarity around liability may prove to be an issue as AI use increases. A watching brief is advised, with the door apparently being left open to discussion of specific legislation in the future. For example, the case study on automated healthcare triage systems notes that there is ‘unclear liability’ if such a system provides incorrect medical advice, which may affect the patient’s ability to seek redress.

What are the next steps?

The government has launched a public consultation on the white paper. Responses should be submitted in accordance with the instructions here by 21 June 2023. It is anticipated that the government will issue its response to the consultation by the end of October 2023.

Companies should consider how the UK’s proposed approach to AI regulation may affect them, and whether they wish to respond to the consultation.

Over the next twelve months, regulators will start to issue practical guidance to organisations, as well as other tools and resources, setting out how to implement these principles in their sectors.

Countries around the world are beginning to draft AI-specific rules. AI is an area where, post-Brexit, the UK is forging an approach very different from that of the EU and many other jurisdictions. It remains to be seen whether the UK’s approach will be favoured by industry, market forces or the public, and the extent to which it will have global resonance.

It may be that the EU’s draft AI laws will set a global standard with which UK and global companies will often find it necessary or desirable to comply in any event. Businesses operating in the AI space will be watching to see whether the EU’s approach opens the floodgates for other jurisdictions to introduce AI-specific regulation modelled on the EU’s forthcoming AI laws.
