
Freshfields TQ


4 minute read

EU AI Act unpacked #6: Fundamental rights impact assessment

In the sixth part of our EU AI Act unpacked blog series, we take a look at the fundamental rights impact assessment (FRIA) under Article 27 of the EU AI Act (AI Act). Organisations deploying AI systems need to understand the scope and requirements of this obligation in order to determine whether, and to what extent, they have to perform such FRIAs.

In order to pursue a human-centric approach to AI and ensure that fundamental rights – such as human dignity or non-discrimination – are protected, certain deployers of high-risk AI systems must carry out a FRIA before putting the high-risk AI system into use. By way of the FRIA, deployers are supposed to identify the specific risks that the AI system poses to the rights of individuals or groups of individuals likely to be affected and to determine appropriate measures to be taken if these risks materialise. Further, the deployers concerned must notify the market surveillance authority about the results of the FRIA.

The requirement to conduct a FRIA seems to be inspired by the requirement to conduct a so-called data protection impact assessment (DPIA) under the EU General Data Protection Regulation (GDPR). There are therefore some similarities, but also significant differences, between these instruments that are worth taking a closer look at. More generally, while a DPIA specifically focuses on how risks in relation to personal data are mitigated, a FRIA takes a broader view, assessing not only the impact on data privacy but a wider range of fundamental rights, such as freedom of expression, access to justice or the right to good administration.

Scope of application

The obligation to conduct a FRIA is applicable to certain groups of deployers of specific high-risk AI systems (see our blog post on the classification of AI systems).

First, the obligation applies to deployers that are bodies governed by public law or private entities providing public services. They must conduct a FRIA for a large subset of high-risk AI systems. This covers, in particular, all high-risk AI systems listed in Annex III of the AI Act, excluding those linked to critical infrastructure, ie those used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.

Second, deployers of AI systems used to evaluate creditworthiness, establish a credit score, or for risk assessment and pricing in relation to life and health insurance (referred to in points 5(b) and (c) of Annex III) must also perform a FRIA.

Performing a fundamental rights impact assessment

Similar to a DPIA, a FRIA must be performed before the first use of the high-risk AI system and must be updated when the deployer considers that any of the relevant factors have changed or are no longer up to date. In similar cases, the deployer can rely on previously conducted FRIAs or on existing impact assessments carried out by the provider.

The assessment must include:

  • a description of the deployer’s processes in which the high-risk AI system will be used;
  • the period of time and the frequency with which the high-risk AI system is intended to be used;
  • the categories of natural persons and groups likely to be affected by its use in the specific context;
  • the specific risks of harm likely to impact the affected categories of persons or groups of persons;
  • a description of the implementation of human oversight measures; and
  • the measures to be taken if the risks materialise.

To conduct the FRIA in a structured way, organisations might consider taking the following steps:

  1. Risk identification: Thorough analysis of potential risks to fundamental rights that may arise from the use of the high-risk AI system, including discrimination, privacy infringements and restrictions on freedom of expression.
  2. Impact assessment: Assessing and quantifying the specific impact on the fundamental rights of the affected individuals or groups of persons.
  3. Mitigation measures: Defining and implementing appropriate mitigation measures based on the risk identification and impact assessment to minimise negative impacts on fundamental rights (e.g. arrangements for human oversight according to the instructions for use).
  4. Documentation and transparency: Documenting the entire process and ensuring transparency to the affected individuals and supervisory authorities.

Regarding the relationship between FRIAs and DPIAs, it is important to bear in mind that if a high-risk AI system requires the performance of both a FRIA and a DPIA, they can be carried out within one assessment addressing the relevant aspects under the AI Act as well as the GDPR. If a DPIA has already been conducted, the FRIA must complement that DPIA (Article 27(4) AI Act).

Organisations should also be aware that they are not only required to conduct and document FRIAs, but that they must also notify the competent market surveillance authority about the results of the respective FRIA, except in a limited number of cases. In this regard, the AI Office will publish a template that deployers are required to complete and submit to the market surveillance authority.

Key takeaways

  • Many organisations in the public and private sectors deploying high-risk AI systems will be required to perform a FRIA. The FRIA is supposed to enable them to identify potential risks to fundamental rights early and take appropriate measures to mitigate them.
  • The requirement to conduct a FRIA specifically relates to high-risk AI systems used in sectors such as healthcare, finance or insurance, where the legislator has found that the potential impact of an AI system on fundamental rights can be even more significant than in other areas.
  • There are similarities and potential links between FRIAs and DPIAs under the GDPR. Both reflect the growing importance for organisations of performing and maintaining documented assessments of fundamental rights risks as well as of specific privacy risks.
  • The organisations concerned are required to notify the market surveillance authority of the results of the FRIA by submitting the completed template published by the AI Office.
  • FRIAs must be performed before the first use of the high-risk AI system. Nonetheless, they need to be understood as a continuous process, regularly reviewed and updated throughout the lifecycle of a high-risk AI system.

What’s next?

In our next blog, we will explore what businesses need to consider if they use a general-purpose AI (GPAI) model.

Tags

ai, eu ai act, eu ai act series