[You can find all episodes of our EU AI Act unpacked blog series by clicking here.]
In this edition of our EU AI Act unpacked blog series, we take a look at the EU AI Act (AI Act) and its implications for the processing of personal data and, in particular, the AI Act's interplay with the EU General Data Protection Regulation (GDPR).
The AI Act's relationship with data protection laws
The development and use of AI systems and AI models are in most cases closely connected with the processing of personal data. To ensure consistency, 'personal data' under the AI Act has the same meaning as under the GDPR (Article 3(50) AI Act).
Further, Article 2(7) AI Act explicitly states that the AI Act is without prejudice to the GDPR: specific references in the AI Act to obligations under the GDPR (see Articles 50(3), 59(3) AI Act) serve clarification purposes only and do not mean that the GDPR applies only in those explicitly mentioned cases. Therefore, insofar as an AI system or AI model involves the processing of personal data, providers of AI systems and AI models, as well as deployers of AI systems, remain subject to their obligations as (joint) controllers or processors under the GDPR. As the separation of personal from non-personal data can be technically complex, the likelihood that AI systems process personal data at some point in their life cycle is rather high.
Overlaps and tensions between the AI Act and the GDPR
While the AI Act contains provisions concerning product safety and, with some exceptions, does not provide any rights for individuals, the GDPR grants individuals broad data protection rights. As the GDPR is technology-neutral, it also covers the processing of personal data in the context of AI. The AI Act and the GDPR are thus generally intended to work hand in hand.
Indeed, there is considerable overlap between many provisions of the two laws:
- Human oversight: Both laws include provisions on human oversight. Under Article 22 GDPR, data subjects have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them. Similarly, Article 14 AI Act mandates that providers of high-risk AI systems take a 'human-oversight-by-design' approach; deployers of high-risk AI systems must in turn ensure human oversight of their AI systems (see Article 26(1) AI Act). By complying with Article 26(1) AI Act, deployers of high-risk AI systems may simultaneously ensure compliance with Article 22 GDPR, as meaningful human oversight as foreseen under Article 26(1) AI Act may render a decision no longer solely automated for the purposes of Article 22 GDPR.
- Assessments: Both the AI Act and the GDPR require the performance of certain risk assessments. Article 35 GDPR, on the one hand, requires the controller to carry out a data protection impact assessment (DPIA) where a type of processing is likely to result in a high risk to the rights and freedoms of natural persons. Under Article 43 AI Act, on the other hand, the provider of a high-risk AI system must perform a conformity assessment to demonstrate the system's compliance with the AI Act's requirements. While the two assessments differ in form and purpose, technical information used for one is very likely to be helpful in preparing the other. Moreover, certain deployers of high-risk AI systems must perform a fundamental rights impact assessment (FRIA) as laid out in Article 27 AI Act. Like the DPIA, the FRIA aims to identify and mitigate risks to the fundamental rights of natural persons. Considerations and governance mechanisms developed for the DPIA will therefore very likely be helpful for the FRIA, and vice versa.
- General principles: Both laws are built around general principles which overlap to a certain extent. While the GDPR principles include fairness, transparency, data minimisation, and confidentiality, recital 27 of the AI Act similarly refers to principles such as fairness and transparency, but also includes human oversight and diversity. The latter principles were heavily influenced by the first intergovernmental standard on AI, the OECD Recommendation of the Council on Artificial Intelligence, which introduced principles for responsible stewardship of trustworthy AI that are all strongly linked to the principles of the GDPR. Not only does the GDPR require the controller to be able to demonstrate compliance with the GDPR principles at any time (see Article 5(2) GDPR), but the AI Act also refers to the principle of accountability in recital 27.
However, the relationship between the two laws is not without tension. For example, AI and big data must be used in a way that complies with GDPR principles such as data minimisation and purpose limitation. When training an AI model, it is therefore crucial that the model is trained only with data that are necessary for the purpose of the training; in addition, there must be an applicable legal basis both for collecting the data and for using them for training. Controllers should also continuously evaluate the development of the AI model and assess whether the personal data used for training remain necessary.
National competent authorities and the role of data protection authorities
Under the AI Act, each Member State is required to designate one or more national competent authorities to supervise its application and implementation, as well as to carry out market surveillance activities and enforce the AI Act. The national competent authorities will be supported by the European Artificial Intelligence Board and the European AI Office.
In Germany, for instance, it was recently announced that the Federal Network Agency (Bundesnetzagentur) is to play a central role in the national supervisory structure for the AI Act, whereas in other Member States the data protection authorities will act as the responsible market surveillance authorities. In many Member States, however, it is not yet clear who will take on this supervisory role at national level.
In any event, the AI Act provides for the involvement of data protection authorities in data protection matters. Numerous data protection authorities have already commented on the processing of personal data in the context of AI: the Swedish IMY, the French CNIL, and the German DPA of Baden-Württemberg, for example, have published guidance on the interplay between the GDPR and the AI Act. While some offer more general advice on how providers and deployers of AI systems can comply with GDPR principles and requirements, others, such as the Bavarian DPA, offer concrete checklists of what needs to be assessed when creating AI models.
Key takeaways
- Providers and deployers must ensure compliance with data protection legislation, in particular the GDPR, alongside compliance with the AI Act.
- Certain obligations under the AI Act and the GDPR overlap; for these obligations, providers and deployers should ensure consistency in their compliance approach and documentation.
- To the extent that there are tensions between the GDPR and the use of AI, especially in relation to the data minimisation principle, organisations should pay particular attention to the selection and processing of personal data and document their decisions in order to meet accountability requirements.
- Regardless of whether data protection authorities are appointed as market surveillance authorities in their Member State, they are very active in publishing guidance on the handling of personal data in the context of AI, and this guidance should be followed closely.
What’s next?
In our next blog, we will look at the draft Code of Practice for general-purpose AI (GPAI) models under the AI Act.