The rapid development of artificial intelligence (AI) systems has brought, and will continue to bring, transformative changes to many industries, not least the healthcare sector. From diagnostics to personalised treatment plans, AI systems have the potential to revolutionise patient care and improve overall health outcomes.
As these systems become increasingly sophisticated, concerns regarding their ethical use, safety and transparency are growing. To address these concerns, the EU Commission began the legislative process for an act to regulate AI in 2021.
The proposed AI Act aims to ensure the conformity of AI systems with fundamental rights and values. Below are our thoughts on the implications of the proposed AI Act for healthcare, with a particular focus on the definition of ‘AI systems’ and the categorisation of high-risk AI systems.
‘AI system’ as defined in the Commission’s proposal
The definition of an ‘AI system’ will be decisive for determining the AI Act’s scope.
The definition in the Commission’s original proposal is very broad. In that proposal, ‘AI system’ was defined as software developed using machine learning, logic- or knowledge-based approaches that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments with which they interact.
From a healthcare perspective, the term encompasses a broad variety of technology-driven systems and offerings such as:
- Medical image analysis tools, natural language processing systems and predictive analytics tools for patient outcomes, which are typically based on machine learning algorithms; and
- Business rules management systems and robotic process automation systems, where the rules of the system are typically configured manually by humans.
Therefore, the Commission’s proposed definition would capture a multitude of tech-based tools and systems used in the healthcare sector, with the consequence that developers, distributors and users of such systems would be subject to the complex compliance framework set out in the AI Act.
Narrower definition sought by the EU Council
The definition of ‘AI system’ and other issues sparked contentious discussions within the Council of the EU, which adopted its provisional position on the AI Act on 6 December 2022.
The Council seeks to narrow the definition of ‘AI system’ by limiting its scope to systems that operate with certain ‘elements of autonomy’ from human involvement. In particular, this would exclude systems that use rules defined solely by natural persons to automatically execute operations.
In addition, the Council seeks to exclude from the scope of the AI Act:
- AI systems that are specifically developed and put into service for the sole purpose of scientific research and development; and
- Any research and development activity regarding AI systems.
Hence, if the Council’s approach prevails, the narrower definition of ‘AI system’ would limit the AI Act’s scope, meaning that less healthcare software would be subject to the AI Act than under the Commission’s original proposal.
High-risk AI systems in healthcare
The AI Act adopts a risk-based approach, establishing different compliance frameworks for AI systems based on risk categories. The most extensive compliance obligations (including obligations relating to transparency, conformity assessments, human oversight, data security and extensive documentation) apply to so-called high-risk AI systems as specified in the AI Act.
Under the Commission’s original proposal, nearly all AI systems used in healthcare would be classified as high-risk, especially because the Medical Device Regulation (MDR) is listed in the relevant annex of the AI Act and the MDR definition of a ‘medical device’ is very broad. Under the MDR, a ‘medical device’ includes any software intended by the manufacturer to be used for human beings for certain medical purposes.
EU Parliament’s current position on AI systems
The EU Parliament has not yet determined its approach to the AI Act. Based on the deliberations to date, it seems likely that the Parliament will adopt a narrower definition of AI systems than the original Commission proposal. According to the most recent information, the Parliament is likely to suggest defining an AI system as a machine-based system that is designed to operate with varying levels of autonomy and that can generate outputs such as predictions, recommendations or decisions influencing physical or virtual environments. Such a definition could exclude rule-based systems configured manually by humans. We are tracking whether the Parliament will also propose changes to the categorisation of AI systems as high-risk.
Next steps
The Parliament is expected to adopt its position on the AI Act in May 2023. After that, interinstitutional negotiations (so-called ‘trilogues’) among the European lawmakers will begin in order to agree the final text of the AI Act.
Currently, it is expected that the AI Act will take effect by the end of 2023. Once the regulation is adopted, those active in the space will likely have at least two years to prepare for compliance.
Impacted companies based in the EU, or companies that sell or offer health-related products and services in the EU, will have to assess:
- whether their products and services qualify as AI systems; and
- if so, whether the product or service is considered a low-, mid- or high-risk AI system.
As explained above, high-risk AI systems will be subject to particularly strict regulation under the AI Act, and developers, distributors and users of such systems will face a variety of obligations. The Freshfields Life Sciences team will continue monitoring developments in this space.