
EU AI Act unpacked #5: Key governance obligations in relation to high-risk AI systems

In the previous post of our blog series, we delved into the timeline and implementation periods of the EU AI Act (AI Act). In this blog post, we take a closer look at the obligations of providers and other stakeholders in the AI Act value chain regarding high-risk AI systems. We will also offer initial insights into how to stay ahead of the AI compliance curve.

Overview of obligations for providers of high-risk AI systems

The AI Act introduces a set of obligations for providers of high-risk AI systems, including the following:

  • establishing, implementing, documenting and maintaining a risk management system (Article 9 AI Act);
  • effective data governance (Article 10 AI Act);
  • producing technical documentation to demonstrate compliance with the AI Act (Article 11 AI Act);
  • allowing for the automatic recording of events (logs) (Article 12 AI Act);
  • ensuring sufficient transparency and effective human oversight (Articles 13 and 14 AI Act);
  • designing and deploying the AI system in a way to achieve appropriate accuracy, robustness and cybersecurity (Article 15 AI Act); and
  • implementing a quality management system and keeping documentation (Articles 17 and 18 AI Act).

Risk management system

Providers of high-risk AI systems must implement, document, and maintain a quality management system. This system is intended to ensure that high-risk AI systems are designed, developed, and deployed in compliance with the AI Act. It covers, among other things, conformity assessment procedures, technical standards and, in particular, the establishment of a risk management system. The AI Act conceives of ‘risk management’ as an iterative process that must run throughout the entire lifecycle of the high-risk AI system and requires systematic reviews and updates.

The risk management process under the AI Act requires the provider to meet general risk management objectives on a case-by-case basis (see the sketch after this list):

  • identifying known and reasonably foreseeable risks to health, safety or fundamental rights under normal use and reasonably foreseeable misuse; 
  • considering additional risks from post-market monitoring data; and 
  • implementing appropriate and targeted measures to address identified risks. 
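
To make this iterative cycle more tangible, below is a minimal sketch of how an identify-review loop over a risk register could look in code. It is an illustration only: the Risk and RiskRegister classes and their fields are hypothetical and are not terms defined by the AI Act.

```python
# A minimal sketch of an iterative risk register, loosely modelled on the
# Article 9 process. All class and field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str            # e.g. "misclassification under low light"
    source: str                 # "design review", "post-market monitoring", ...
    severity: int               # 1 (low) .. 5 (high), assessed case by case
    mitigation: str | None = None
    last_reviewed: date | None = None

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def identify(self, risk: Risk) -> None:
        """Record a known or reasonably foreseeable risk."""
        self.risks.append(risk)

    def review(self, today: date) -> list[Risk]:
        """Mark all risks as reviewed and return those that still lack
        a targeted mitigation measure."""
        for r in self.risks:
            r.last_reviewed = today
        return [r for r in self.risks if r.mitigation is None]

register = RiskRegister()
register.identify(Risk("biased output for under-represented dialects",
                       source="post-market monitoring", severity=4))
print(register.review(date.today()))  # risks still awaiting mitigation
```

The point of the sketch is the loop, not the data structure: risks enter the register from design reviews and post-market monitoring alike, and each review cycle surfaces the ones that still need targeted measures.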

Providers will need to align their new governance obligations under the AI Act with other existing regulatory requirements that apply to them, such as data protection laws and sector-specific regulations. Leveraging synergies between these frameworks will be critical to streamline compliance processes, ensure cohesive governance structures and thereby improve efficiency. For example, in their efforts to mitigate risks to an acceptable level and ensure awareness, organizations may build on their data protection policies, which refer to principles such as data protection by design and by default.

Data governance

The quality of an AI system's results is largely determined by the quality of its training data. The AI Act therefore requires providers of high-risk AI systems to use high-quality data sets for the training, validation and testing of those systems. These data sets must be subject to appropriate data governance and management practices and contain information that is relevant, sufficiently representative and, to the best extent possible, free of errors and complete. To ensure quality training data, providers may have to invest significant resources in the preparation and continuous examination of the data used to train the AI system.
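
By way of illustration, the following sketch shows how automated data-quality gates might check completeness and representativeness before training. The thresholds and column names are hypothetical examples; the AI Act does not prescribe specific metrics or limits.

```python
# A minimal sketch of pre-training data-quality checks, assuming a pandas
# DataFrame with a demographic column named "group". Thresholds are
# hypothetical, not values set by the AI Act.
import pandas as pd

def check_dataset(df: pd.DataFrame, group_col: str,
                  max_missing: float = 0.01,
                  min_group_share: float = 0.05) -> list[str]:
    findings = []
    # Completeness: flag columns with too many missing values.
    for col, share in df.isna().mean().items():
        if share > max_missing:
            findings.append(f"column {col!r}: {share:.1%} missing")
    # Representativeness: flag severely under-represented groups.
    for group, share in df[group_col].value_counts(normalize=True).items():
        if share < min_group_share:
            findings.append(f"group {group!r}: only {share:.1%} of rows")
    return findings

df = pd.DataFrame({"feature": [1.0, None, 3.0, 4.0],
                   "group": ["a", "a", "a", "b"]})
print(check_dataset(df, "group"))  # flags the 25% missing "feature" column
```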

A key concern with the use of AI is the amplification of biases. Providers of high-risk AI systems are therefore obliged to take appropriate measures to detect, prevent and mitigate possible biases. To take such measures while complying with their obligations under the GDPR, providers will be able to rely on an exceptional permission to process special categories of personal data, such as data on ethnic origin and health, which are otherwise subject to strict limitations under the GDPR.
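
As an illustration of what such a measure could look like, the sketch below computes one common fairness metric, the demographic parity gap: the difference between groups in the rate of favourable outcomes. The metric choice and the 0.1 threshold are hypothetical; the AI Act does not mandate a particular bias metric.

```python
# A minimal sketch of bias detection via the demographic parity gap.
# Metric and threshold are illustrative choices, not requirements.
def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

outcomes = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favourable decision
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(outcomes, groups)
print(f"parity gap: {gap:.2f}")        # 0.75 - 0.25 = 0.50
if gap > 0.1:                          # hypothetical internal threshold
    print("gap exceeds threshold; investigate and mitigate")
```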

Transparency and human oversight

‘Putting the human in the loop’ is a core principle for the development of safe and trustworthy AI. To enable effective human control and oversight, the AI Act includes stringent documentation and transparency obligations. For every high-risk AI system, providers are required to produce technical documentation before that system is placed on the market. Further, they must implement certain logging capabilities. 
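
To illustrate what such logging capabilities could look like in practice, here is a minimal sketch of an audit-logging wrapper around a hypothetical predict function. The logged fields are illustrative; which events must actually be recorded depends on the system and its risk profile.

```python
# A minimal sketch of automatic event logging around inference calls.
# The predict function and the logged fields are hypothetical examples.
import json, logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("highrisk_ai.audit")

def predict(features: dict) -> str:
    return "approve" if features.get("score", 0) > 0.5 else "refer_to_human"

def logged_predict(features: dict, model_version: str) -> str:
    output = predict(features)
    audit_log.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "input_fields": sorted(features),  # field names only, no raw data
        "output": output,
    }))
    return output

print(logged_predict({"score": 0.7}, model_version="1.4.2"))
```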

The extensive transparency requirements under the AI Act range from instructions for safe use to information about the level of accuracy, robustness and cybersecurity of the high-risk AI system, including (where applicable) further information to enable deployers to interpret the system’s output. Again, it is up to providers to determine how to provide information that is relevant, accessible and comprehensible to deployers of their high-risk AI systems, and to leverage possible synergies with existing information obligations, including those on the processing of personal data.
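
One way to keep such information consistent and machine-readable is a structured ‘instructions for use’ record shipped alongside the system. The sketch below is a hypothetical illustration; its fields simply mirror the transparency topics mentioned above and are not a template from the AI Act.

```python
# A minimal sketch of a machine-readable 'instructions for use' record.
# The structure and field names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class InstructionsForUse:
    intended_purpose: str
    accuracy_metrics: dict[str, float]    # e.g. benchmark results
    known_limitations: list[str]
    output_interpretation: str            # how deployers should read outputs
    human_oversight_measures: list[str]

card = InstructionsForUse(
    intended_purpose="triage of incoming loan applications",
    accuracy_metrics={"f1_holdout": 0.91},
    known_limitations=["not validated for applicants under 21"],
    output_interpretation="scores above 0.5 are referred to a human reviewer",
    human_oversight_measures=["four-eyes review of refusals", "stop button"],
)
print(card.intended_purpose)
```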

In addition, the system must be designed to allow for effective human oversight while in use. This includes appropriate human-machine interface tools and a ‘stop button’ or similar procedure that allows the system to be safely shut down.
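
A bare-bones version of such a shutdown mechanism might look like the sketch below, where a shared flag lets an operator halt a worker loop cleanly. The threading setup is a hypothetical illustration; a production system would also need to drain in-flight work and fail over safely.

```python
# A minimal sketch of a 'stop button': a shared flag that a human operator
# can set to halt the system's worker loop cleanly. Illustrative only.
import threading, time

stop_requested = threading.Event()

def ai_worker() -> None:
    while not stop_requested.is_set():
        # ... one unit of inference work would run here ...
        time.sleep(0.1)
    print("worker: stop received, shutting down cleanly")

worker = threading.Thread(target=ai_worker)
worker.start()
time.sleep(0.3)        # system runs until an operator intervenes
stop_requested.set()   # the human presses the 'stop button'
worker.join()
```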

Obligations for importers, distributors and deployers of high-risk AI systems

The AI Act applies not only to providers of high-risk AI systems, but also to other stakeholders along the AI value chain, namely importers, distributors and deployers (for more information, see our blog post on the personal scope of the AI Act). While providers of high-risk AI systems will bear the brunt of responsibility under the AI Act, importers, distributors and deployers will face downstream compliance obligations well known from other areas of product safety law. Most importantly, importers and distributors will need to ensure that the high-risk AI system conforms with specific provisions of the AI Act. In addition, importers, distributors and deployers must inform the provider and the competent authorities when they identify that a high-risk AI system presents risks to the health, safety or fundamental rights of natural persons.

Key takeaways 

Providers of high-risk AI systems are subject to a broad range of governance obligations under the AI Act. It is important to consider regulatory compliance from the beginning of the design phase and to review compliance continuously along the entire lifecycle of the high-risk AI system. These obligations mainly include implementing a quality management system (including risk management), keeping documentation, ensuring transparency and human oversight, and designing the AI system to achieve appropriate accuracy, robustness, and cybersecurity. A holistic approach to regulatory compliance can leverage synergies with already existing obligations and facilitate fulfilment of the requirements of the AI Act.

What’s next?

In our next blog post, we will focus on the so-called fundamental rights impact assessment under the AI Act, a newly introduced internal risk assessment to evaluate and address potential risks stemming from AI.
