How can stakeholders benefit from synergies when implementing their AI Act compliance framework? This part of our EU AI Act unpacked blog series explores how the AI Act interacts with those provisions of the Digital Services Act (DSA) that are particularly relevant in the platform context; data protection considerations will be the focus of a separate post.
The AI Act is not a stand-alone regulation; it forms part of a broader European action plan. Since the publication of its digital agenda in February 2020, the European Commission (EC) has been committed to shaping Europe’s ‘Digital Decade’, as demonstrated inter alia by the entry into force of groundbreaking regulatory acts such as the AI Act and the DSA. Naturally, questions arise about their relationship, overlaps, contradictions, and, most importantly, potential synergy effects.
AI Act vs DSA
While the AI Act and the DSA have distinct scopes, there are also commonalities. The AI Act regulates AI systems and applies to stakeholders along the AI value chain. The DSA addresses online platforms’ responsibilities, including content moderation, transparency, and user rights. Despite these differences, the AI Act and the DSA do complement each other in some instances:
- Recital 119 of the AI Act acknowledges that AI systems may be provided as intermediary services (or parts thereof) within the meaning of the DSA. Further, Article 2(5) AI Act states that the liability of intermediary services providers under the DSA remains unaffected.
- According to Article 2(4) and Recital 10 DSA, the DSA should be without prejudice to other acts of Union law (i) regulating the provision of information society services in general, (ii) regulating other aspects of the provision of intermediary services in the internal market, or (iii) specifying and complementing the harmonised rules set out in the DSA.
Combining AI Act and DSA compliance efforts
The AI Act requires transparency, accountability, and safety in AI deployment. The DSA provides a layered set of obligations tailored to different categories of digital services, with the most stringent rules applying to designated very large online platforms and very large online search engines (VLOPs/VLOSEs), including transparency, risk assessment, and risk mitigation obligations. Platform providers can therefore achieve synergies by integrating their AI Act compliance efforts with existing DSA processes, for instance in risk management, systemic risk assessments, and transparency obligations.
Risk management
Providers of high-risk AI systems are obliged to implement a risk management system (Article 9 AI Act). Article 9(10) AI Act states that this risk management can be integrated with risk management procedures established pursuant to other EU law, including the DSA.
Recital 118 of the AI Act further clarifies that AI systems or models that are embedded into designated VLOPs/VLOSEs are subject to the risk management framework provided for in the DSA. Consequently, the corresponding obligations of the AI Act should be presumed to be fulfilled if the DSA provisions are complied with, unless significant systemic risks not covered by the DSA emerge and are identified in such models.
Systemic risk assessment
Article 34(1) DSA requires providers of VLOPs/VLOSEs to identify, analyse and assess any systemic risks stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services. This includes any AI systems or models deployed on, or as part of, the platform or search engine, as applicable; such systems therefore need to be integrated into the DSA risk assessment. Any identified risks must be effectively mitigated (Article 35 DSA).
Another point to consider is timing: while the DSA mandates annual or ad hoc systemic risk assessments, the AI Act requires a continuous, iterative risk management process that begins before the AI system is placed on the market and runs throughout its entire lifecycle.
Transparency obligations
The AI Act obliges providers and deployers of AI systems to enable the detection and disclosure of artificially generated or manipulated outputs (see Article 50 AI Act). According to Recitals 120 and 136 of the AI Act, these transparency obligations are vital for the effective implementation of the DSA. This applies in particular to the obligation of VLOPs/VLOSEs to identify and mitigate systemic risks that may arise from the dissemination of artificially generated or manipulated content, such as effects on democratic processes, civic discourse, and electoral processes, including through disinformation.
Enforcement
Most of the AI Act’s provisions will only become applicable two years after its entry into force, i.e. on 2 August 2026. However, the EC is already focusing on AI when enforcing the DSA. This DSA enforcement practice is likely to be decisive for future enforcement under the AI Act, but also in the privacy and antitrust space, where regulators are homing in on AI practices.
Companies preparing for the AI Act should therefore leverage existing compliance mechanisms as much as possible to avoid duplication of work and to ensure consistent frameworks, not only with regard to the DSA but also to the GDPR and other applicable AI regulations.