Freshfields TQ

Technology quotient - the ability of an individual, team or organization to harness the power of technology

EU EMA proposes risk-based approach to AI in pharma lifecycle

The European Medicines Agency (EMA) has published a draft Reflection Paper on the use of AI and machine learning (ML) in the medicinal product lifecycle, and opened a public consultation until the end of the year.

Acknowledging the rapid development in the use of AI and ML across the lifecycle of medicinal products, in the paper the EMA reflects on the scientific principles relevant to the regulatory evaluation of these emerging technologies “to support safe and effective development and use of medicines”. The EMA considers it crucial to identify the aspects of AI/ML that fall within its remit or that of EU member states’ competent authorities.

The Reflection Paper is focused on human medicinal products rather than medical devices, though the EMA acknowledges that devices incorporating AI/ML can be combined with a medicinal product and/or used to generate evidence in a clinical trial supporting a marketing authorisation (MA), and will therefore fall within its remit to a degree.

A risk-based approach

The EMA advocates a risk-based approach. It emphasises that the use of AI/ML in the medicinal product lifecycle should always comply with existing legal requirements, consider ethics and ensure due respect for fundamental rights. It promises further advice on risk management in future regulatory guidance.

The EMA points out that the very nature of these technologies gives rise to new risks, for example a lack of transparency in models and the potential for bias in the underlying data on which they rely. The level of risk depends on the technology itself, the context of use, the degree of influence exerted by the technology and the stage of the lifecycle. The Reflection Paper is structured around the medicinal product lifecycle and outlines the applications, risks and considerations relevant to AI/ML at each stage, from drug discovery and development, through authorisation, to post-authorisation settings.

Channelling responsibility to the MA applicant or MAH

A key principle of the Reflection Paper, and one that comes out strongly, is that of channelling responsibility to the MA applicant or holder (MAH) to plan for and systematically manage risks, and to ensure that all algorithms, models and datasets are fit for purpose and in line with applicable ethical, technical, scientific and regulatory standards, as well as with EMA scientific guidelines. The EMA cautions that, in the context of medicinal products, applicants and MAHs may be held to stricter requirements than would be “standard practice” in the field of data science. It proposes that the MA applicant or MAH must, on request, provide sufficient technical detail to enable a comprehensive assessment of any AI/ML systems used.


The EMA recommends a “human-centric” approach to guide all development and deployment of AI and ML with respect to medicinal products, which is aligned with the European Commission’s package of AI-related proposals, including the proposed draft AI Act and draft AI Liability Directive (AILD) currently being negotiated by EU policymakers (see below).

Other recommendations include:

  • carrying out a regulatory impact and risk analysis for the use of AI/ML - the higher the impact or risk, the sooner it is recommended to engage with regulators/seek scientific advice
  • maintaining independence of training, validation and test data sets
  • adopting measures to limit bias in AI/ML, documenting the source of data and the process of acquisition in a traceable manner in line with GxP
  • developing and using generalizable and robust models
  • following ethical principles defined in the guidelines for trustworthy AI and presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI), and conducting early systematic impact analysis for each project
  • implementing robust governance, data protection and data integrity measures

Comment and next steps

  • Interested stakeholders are invited to comment by 31 December 2023, and are asked to identify opportunities and risks related to AI/ML. The topic will be further discussed during a joint HMA (Heads of Medicines Agencies)/EMA workshop scheduled for 20-21 November 2023.
  • Following the consultation period, the EMA intends to finalise the Reflection Paper, provide additional guidance on risk management, and update existing guidance to address AI/ML-specific issues.
  • This Reflection Paper is timely given that, in parallel, EU policymakers are planning to reach an agreement on the first ever Regulation on AI – the so-called AI Act – by the end of this year. The Spanish Presidency of the Council of the EU has made this file its “top digital priority” and the legislation could potentially serve as a blueprint for other jurisdictions.
  • While the EMA is not officially involved in these EU inter-institutional negotiations (Trilogues), this Reflection Paper will no doubt be under close scrutiny by EU policymakers engaged in negotiating the AI Act (as well as for the related AILD negotiations concerning the harmonisation of EU civil liability rules related to AI which will follow).
  • Similarly, in the U.S., the Food and Drug Administration (FDA) recently published a discussion paper requesting input from pharmaceutical industry stakeholders on the use of AI/ML in drug development and manufacturing. The discussion paper highlights the importance of a risk-based approach to the adoption of AI/ML, particularly in light of the risk of bias in the data used to train AI/ML algorithms. As a follow-up to the discussion paper, the FDA is planning a workshop to discuss how regulators and innovators can work together to realise the potential of AI/ML for product development while remaining aware of potential challenges.


ai, life sciences, product liability