EU study into medical AI highlights the key risks and shortcomings of legal frameworks.

In short

A new study from the European Parliamentary Research Service examines medical AI and the particular opportunities and risks that it poses. The authors’ clear view is that AI in the healthcare domain poses specific risks that merit separate consideration from the perspective of legal frameworks and allocation of accountability; they find the current and proposed frameworks lacking when it comes to medical AI.

In detail

How will artificial intelligence contribute to the development of life sciences and healthcare in Europe? And what policy options are available to manage the risks that it poses? The European Parliamentary Research Service has recently published an in-depth study – available here – that tries to answer those questions.

The paper, which draws on a ‘comprehensive’ inter-disciplinary literature review, details the ‘great promise’ of AI and its ‘potential to revolutionise the field of health’, including in addressing pressing healthcare issues such as ageing populations, inefficiency of existing health systems and health inequities. It considers that AI could make the most difference in clinical practice, biomedical research, public health and health administration, in particular.

However, the majority of the study is devoted to identifying the risks that the use of AI in healthcare may pose, as well as the vulnerabilities which may result from systems and regulatory frameworks that are not yet prepared for the developments. One of the key themes which emerges is the authors’ view that medical AI poses specific risks and requires its own regulatory framework to address the socio-ethical implications of its use.

Key risks posed by AI in healthcare

The paper identifies seven main risks of AI in medicine and healthcare:

  • Patient harm due to AI errors, with potential life-threatening consequences;
  • Misuse of medical AI tools, including human error by those tasked with using them in practice (eg healthcare professionals);
  • Bias in AI algorithms and the potential for the perpetuation of existing inequalities (see further for a discussion of algorithmic bias in healthcare datasets);
  • Lack of transparency (linked to the concepts of traceability and explainability) leading to a lack of understanding and trust in the AI’s predictions and decisions;
  • Privacy and security issues, including unauthorised personal data sharing, data breaches, and the risk of harmful or even fatal cyber attacks at individual, hospital or health system level;
  • Gaps in algorithmic accountability (see further below); and
  • Obstacles to implementation of AI in real world healthcare settings, including limited data quality, structure and interoperability issues across heterogeneous clinical centres and electronic health records. (Note: It is possible that the recent EU proposals for a European Health Data Space may address some of these concerns – although exactly how and the interplay with other EU instruments such as GDPR remains to be seen. Additionally from a UK perspective, we recently analysed some of the obstacles to unlocking health data following the Goldacre report.) 

Criticisms levelled at existing regulatory frameworks

Although the study is framed in politically neutral terms, it does not pull its punches in highlighting perceived deficiencies in the current EU regulatory framework. Of particular interest are comments concerning gaps in legal algorithmic accountability and shortcomings of existing and proposed regulatory frameworks for risk assessment and management of AI.

Accountability concerns

The authors believe that there are gaps in national and international regulations concerning who should be held accountable for errors or failures of AI systems, especially in medical AI.

The study recommends:

  • The development of frameworks and mechanisms to improve accountability in medical AI, which would assign responsibility adequately to all actors, including manufacturers. In indicating that AI manufacturers need to be held accountable, the authors note that if clinicians think they will be systematically held responsible for all medical errors (consistent with the traditional model of clinician responsibility in healthcare) then they are unlikely to adopt emerging AI solutions.
  • Specific regulation of mobile or web-based AI tools in the commercial medical diagnostics and health monitoring arena, which it considers are particularly vulnerable to misuse and human error.
  • The establishment of regulatory agencies dedicated to the development of the required frameworks for medical AI and for enforcement.

The authors refer to the anticipated reform of EU liability laws expected after the summer break, including not only a revision of the Product Liability Directive, but also a Directive on AI liability. The authors anticipate that this reform will adapt existing liability laws to the challenges of AI, to ensure that victims who suffer damage from AI technology are compensated. They say that specific sectoral adjustments of existing regulation may be required for AI in healthcare (see here for further analysis of these potential reforms more generally).

Criticisms of the current legal risk framework

The report states that the specific risks of medical AI require a structured approach to risk assessment and management that specifically addresses those challenges, and is critical of the existing and proposed regulatory frameworks covering AI in the EU:

  • With respect to the EU MDR and the IVDR, the authors seem supportive to the extent that these instruments are directly applicable to medical AI. However, they say that these instruments fall short because they were developed at a time when AI was in the early stages of development and therefore ‘many aspects specific to AI are not considered,’ such as continuous learning of AI models or the identification of algorithmic biases.
  • The study also considers the proposed AI Regulation. While the authors seem generally to endorse a risk-based approach to the regulation of medical AI (the general approach of the AI Regulation), they consider that it ‘does not take into account the specificities and risks of AI in the healthcare domain’ and suffers from some of the same limitations as the MDR and IVDR, such as a ‘lack of mechanisms to address the dynamic nature and continuous learning of medical AI technologies’ (see further analysis of the proposed EU AI Regulation more generally).

The paper also sets out other ‘policy options,’ including:

  • creation of an AI Passport for standardisation and to enable traceability across countries and healthcare organisations;
  • promoting research into clinical, ethical and technical robustness in medical AI, including to improve explainability, interoperability, and bias mitigation; and
  • implementation of a strategy to reduce the divide in medical AI across Europe and the health inequalities that flow from it.

Key takeaways

  • It remains to be seen how lawmakers will tackle the issues raised by this study in the field of medical AI, including comments as to gaps in accountability and shortcomings in the legal frameworks, and whether the recommendations will have any impact on the many EU level reforms anticipated in the year ahead.
  • This study will also be of interest to regulators in jurisdictions which have not yet outlined in detail their AI regulation proposals (medical or otherwise), since many of the comments are of universal application in principle.
  • Recommendations relating to ensuring that accountability is channelled to manufacturers, as well as other actors, will be of particular interest to the AI industry (although they will not come as a surprise).