Freshfields TQ

Technology quotient - the ability of an individual, team or organization to harness the power of technology

UK MHRA sets out its strategy on AI and the regulation of medical products

The UK Medicines and Healthcare products Regulatory Agency (MHRA) has published its AI Strategy (the Strategy) in ‘Impact of AI on the regulation of medical products’. The paper outlines the MHRA’s intended strategic approach to AI in the field of medicine and science, and the steps it is taking to meet the UK government’s expectations. Here we analyse the Strategy and put it into context, including developments since its release and cross-regulatory and international observations. 

Background – the UK government on the planned approach to AI regulation

With the Strategy, the MHRA provides an update on the work it has undertaken since the publication of the UK government’s 2023 White Paper, ‘A pro-innovation approach to AI regulation’ (see our earlier commentary on the White Paper). The Strategy responds specifically to a letter from the relevant Secretaries of State in February this year, which asked the MHRA to outline its strategic approach to AI and the steps it was taking in line with the expectations set out in the White Paper. 

As a brief recap, the White Paper outlined the UK’s planned approach to the regulation of AI, proposing ‘nimble’ and ‘light-touch’ governance, including a regulatory framework comprising five key principles for all relevant UK sector regulators (including the MHRA) to apply:

  • Principle 1. Safety, security and robustness: ‘AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed.’
  • Principle 2. Appropriate transparency and explainability: ‘AI systems should be appropriately transparent and explainable.’
  • Principle 3. Fairness: ‘AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes.’ As highlighted by the recent publication of the Independent Review of Equity in Medical Devices, fairness and equity in medical devices have been a recent focus for the UK government (see our earlier commentary).
  • Principle 4. Accountability and governance: ‘Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle.’
  • Principle 5. Contestability and redress: ‘Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.’

The Strategy and the MHRA’s view of its role in AI regulation

In the Strategy, the MHRA considers its role with respect to the opportunities and risks presented by AI from three perspectives: 

1. As a regulator of AI products

The MHRA references the major programme currently underway to overhaul the medical devices regulatory landscape in the UK (see our further analysis). Under the proposed new rules, the MHRA anticipates that many AI products currently in the lowest risk classification (and therefore able to be placed on the market without independent assessment of conformity) are likely to be 'up-classified', meaning they will require greater scrutiny throughout the product lifecycle. 

At the same time, the MHRA says it is mindful of the need for a proportionate approach, and that its intention is to use ‘principles supplemented by guidance to avoid constraining innovation’. 

The Strategy discusses particular challenges posed by the principles of transparency, explainability and fairness in the context of AI as a medical device (AIaMD). In particular, it notes the 'key risk' of the human/device interface (with further detailed guidance anticipated in spring 2025), and challenges of fairness highlighted by Dame Margaret Whitehead in her recent Independent Review into the equity of medical devices in the UK (see our further analysis). 

2. As a public service organisation delivering time-critical decisions

The MHRA views AI as presenting an opportunity to improve the efficiency of its services across all regulatory functions, which it hopes can lead to earlier access to medical products. 

The MHRA recognises that it is ‘early in [its] journey’ of discovering the potential of AI, and that it needs to invest in getting 'the basics right'. A first step is to develop an MHRA data strategy to incorporate the safe and responsible application of advanced analytics and AI within the organisation, including large language models and generative AI. One application discussed is the use of supervised machine learning to carry out initial assessments of documents submitted for marketing authorisations.

3. As an organisation that makes evidence-based decisions that impact public and patient safety

The MHRA expects AI to feature increasingly in how those regulated by the MHRA undertake their activities and generate evidence (eg the use of AI for vigilance purposes, impacts on clinical trial design and the pace of new medicine development). While it does not expect such developments will necessarily change how it regulates or the questions it needs to ask to determine safety, the MHRA acknowledges that it does need to ensure it fully understands the impact of such changes in the sector to be able to regulate effectively. 

Comment

  • The Strategy sets out a relatively ambitious development programme with detailed guidance on multiple related topics promised in the coming months (including on cybersecurity by spring 2025). It remains to be seen how far regulation of AI will form part of future regulations on medicines and medical devices, and what will be contained in softer principles and guidance. Since the Strategy was published, MHRA AI-related guidance output has continued, for example with further guiding principles to ensure transparency in machine learning (ML) medical devices.
  • Consistent with a more general and recent move towards international recognition (see this press release from the MHRA, for example), the Strategy places a clear emphasis on international collaboration with respect to AIaMD, noting in the context of regulatory reform proposals that 'international alignment is critical for businesses that operate in a global environment'. We expect this will be welcomed by the industry. 
  • Shortly after publishing the Strategy, the MHRA launched the AI Airlock, a regulatory sandbox designed in collaboration with regulators, manufacturers and other relevant stakeholders to develop understanding and solutions in response to novel regulatory challenges for AIaMD. The sandbox will open for applications later in the summer and will support up to six virtual or real-world projects through simulation to test a range of regulatory challenges applicable to AIaMD used for clinical purposes within the NHS.
  • As the MedTech industry continues to assess the opportunities and risks of AI in healthcare, regulators around the world are ramping up efforts to regulate the development and use of the technology. In October of last year, the US Food and Drug Administration announced the formation of a Digital Health Advisory Committee to provide expertise on the scientific and technical issues related to digital health technologies, including AI/ML. More recently, the European Medicines Agency and the Heads of Medicines Agencies published a workplan to guide the use of AI in medicines regulation until 2028. Finally, the EU AI Act, which gained final approval from the Council of the EU last month, will likely have a significant impact for all life sciences and healthcare companies operating in the EU as they leverage AI across their business operations.
  • Cross-regulatory issues are also likely to need increasing focus – for example, where medical devices and/or AIaMD involve the use of personal data in their development and operation. As the technology advances, there is likely to be an increasing need for cross-regulatory approaches to allow regulators to facilitate the White Paper’s key principles of safety, transparency and accountability whilst balancing them against aims to avoid constraining innovation. 
  • Finally, of course, with the upcoming UK General Election on 4 July 2024 the impact of a potential change in Government must be considered. The Labour Party manifesto, for example, promises an industrial strategy that (amongst other things) 'supports the development of the [AI] sector' and says that reforms will ‘harness the power of…AI in the NHS’, while also proposing ‘binding regulation on the handful of companies developing the most powerful AI models’. See further our UK Election Manifesto Comparison.

Tags

ai, innovation, life sciences