
EU AI Act unpacked #12: International soft law approaches to regulate AI

In the twelfth part of our EU AI Act unpacked blog series, we take a look at some of the best-known international soft law approaches to regulating AI.

The EU AI Act (AI Act) is the world’s first comprehensive legislation regulating AI. Beyond the AI Act, however, many non-binding frameworks on AI exist. Some of the best known are the ISO/IEC 42001 standard (ISO 42001), the NIST Artificial Intelligence Risk Management Framework (NIST AI RMF), the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (G7 Code of conduct) and the OECD Council Recommendation on Artificial Intelligence (OECD AI Principles). In addition, in the US, major AI developers have agreed to a set of voluntary commitments brokered by the White House (White House Commitments).

Voluntarily adhering to such frameworks can be an effective way to demonstrate commitment to the responsible development and use of AI, or can provide a competitive advantage in the value chain, and many businesses are considering whether and how to align their compliance and governance efforts with such frameworks. In this context, the European Commission is promoting the so-called ‘AI Pact’, which seeks the industry’s voluntary commitment to start implementing the requirements of the AI Act ahead of the legal deadlines.

This blog post provides a brief overview of these international approaches to AI and how they relate to the AI Act.

1. OECD AI Principles 

The OECD AI Principles, published in 2019 by the OECD Council, were the first international framework on AI. Aimed at promoting responsible stewardship of trustworthy AI, they set out five principles that companies may implement, as well as five recommendations addressed to governments. By adhering to these principles, companies commit, for example, to transparency and responsible disclosure regarding AI systems by providing meaningful, context-based information, to respecting the rule of law, human rights and democratic values, and to proactively engaging in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet. An overview of national governmental approaches to AI can be found in the 2023 report on the state of implementation of the OECD AI Principles.

2. G7 Code of conduct

Building on the OECD AI Principles, the G7 countries agreed in 2023 on an international code of conduct aimed at promoting safe and trustworthy AI worldwide: the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, or ‘G7 Code of conduct’. This code of conduct provides voluntary guidance in the form of eleven so-called ‘actions’. Organisations should, for example, take appropriate measures to identify, evaluate and mitigate risks across the AI lifecycle; develop, implement and disclose AI governance and risk management policies; and prioritise the development of advanced AI systems to address the world’s greatest challenges.

3. NIST AI RMF

The NIST AI RMF is a voluntary, non-sector-specific and use-case-agnostic framework focused on AI risk management, published by the National Institute of Standards and Technology (US Department of Commerce) in 2023. It is intended to guide organisations that design, develop, deploy or use AI systems in managing risks and promoting the development of trustworthy and responsible AI. The NIST AI RMF outlines four core high-level functions for organisations developing and deploying AI to consider (Govern, Map, Measure and Manage), and each function is broken down into practicable actions and outcomes. The functions are designed to develop a culture of AI risk management (Govern), identify risks (Map), assess, analyse and track identified risks (Measure), and prioritise and act on risks based on their projected impact (Manage). To help organisations implement these high-level functions, the NIST AI RMF Playbook outlines suggested actions that organisations can voluntarily apply based on their needs and interests.

As a companion resource to the NIST AI RMF for generative AI, the National Institute of Standards and Technology recently published a draft Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. This document provides specific guidance for developers of generative AI models and identifies further actions these companies can take to implement the NIST AI RMF in the specific context of generative AI systems.

4. ISO 42001

ISO 42001 is a voluntary, non-binding standard developed by the International Organization for Standardization and the International Electrotechnical Commission in 2023. As explained in our previous blog post, it specifies requirements and provides guidance for establishing, implementing, maintaining and continually improving an AI management system within an organisation. In its main part, ISO 42001 establishes ten clauses with several requirements, including establishing an AI policy that is aligned with other organisational policies, AI objectives consistent with the AI policy, and an AI risk assessment and treatment process. Two annexes provide further implementation controls and guidance for organisations.

5. White House commitments

In July 2023, the US White House announced that seven major AI developers had agreed to a set of voluntary commitments to ensure that the future of AI development prioritises the principles of safety, security and trust. The companies agreed to eight commitments, including: (1) to red-team models for significant societal and national security risks; (2) to share information between companies and with government about safety risks, emergent capabilities and efforts to overcome safeguards; (3) to invest in cybersecurity to prevent leakage of model weights; (4) to incentivise third parties to responsibly identify and report weaknesses; (5) to enable users to understand whether audio or visual content was generated by AI; (6) to publicly report model capabilities and risks; (7) to prioritise safety research; and (8) to develop frontier systems to address society’s largest challenges. Since then, many more companies have signed on to these commitments. While the White House Commitments are not binding, they provide a common, high-level foundation for AI risk management across some of the largest US-based AI companies in the world.

6. Different approaches to regulate AI

The aforementioned international approaches differ from the AI Act in terms of legal status and regulatory approach. Key differences include:

  • The AI Act is a legally binding European regulation, while the OECD AI Principles, G7 Code of conduct, NIST AI RMF, White House Commitments and ISO 42001 are non-binding voluntary frameworks.
  • The AI Act follows a risk-based approach, distinguishing between prohibited AI practices, high-risk AI systems, certain AI systems subject to specific transparency obligations, general-purpose AI models and general-purpose AI models with systemic risk, while the OECD AI Principles, NIST AI RMF and ISO 42001 do not distinguish between different AI risk categories. The G7 Code of conduct applies generally to ‘advanced AI systems’, which are defined as ‘the most advanced AI systems, including the most advanced foundation models and generative AI systems’. The White House Commitments take a more forward-looking approach, as they apply to generative models that are ‘overall more powerful’ than each signatory’s most powerful model at the time of signing.
  • The AI Act formally distinguishes between different operator categories along the value chain (provider, deployer, importer, distributor), while the OECD AI Principles, G7 Code of conduct, NIST AI RMF, White House Commitments and ISO 42001 do not. For example, the OECD AI Principles and the NIST AI RMF apply generally to ‘AI actors’, ie those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.

7. Common concepts with different details

Despite the different regulatory approaches, international frameworks and the AI Act share some common concepts. For instance, the AI Act, the NIST AI RMF and ISO 42001 follow the definition of ‘AI system’ set out in the OECD AI Principles.

One example of a common principle is AI literacy. Under the AI Act, providers and deployers must ensure that their staff and other persons using AI systems on their behalf have sufficient skills, knowledge and understanding of the AI system, as well as awareness of the opportunities and risks of AI and the possible harm it can cause. Similarly, ISO 42001 mandates that individuals working under the organisation’s control whose work affects its AI performance must have the necessary competencies. According to the NIST AI RMF, employees and partners should receive AI risk management training that enables them to perform their duties and responsibilities consistent with related policies, procedures and agreements; one of the actions suggested in this context is to educate staff about potential negative impacts that may arise from AI systems.

However, differences remain between these international approaches. For example, the competencies required under ISO 42001 are understood as the ability to apply knowledge and skills to achieve intended results and are thus narrower than AI literacy under the AI Act. As a second example, the G7 Code of conduct provides for the support of digital literacy initiatives; unlike under the AI Act, however, these initiatives are not limited to individuals involved in the operation and use of AI systems but should aim to promote the education and training of the general public. Finally, the AI literacy requirements in the OECD AI Principles are addressed not to companies but to governments.

This means that, as outlined in more detail in our previous blog post, some international frameworks can be a helpful starting point for complying with some AI Act obligations, but their provisions will often not be enough to ensure compliance with the EU regulation.

8. Key takeaways

  • Beyond the EU AI Act, many non-binding, voluntary frameworks on AI already exist, such as the OECD AI Principles, G7 Code of conduct, NIST AI RMF, White House Commitments and ISO 42001.
  • International frameworks on AI can be a good starting point for some AI Act obligations, as some key principles overlap with the AI Act, at least at a general level.
  • Companies may consider voluntarily adhering to these frameworks but should carefully assess the additional effort required to comply with international standards beforehand.

In our next blog, we will take a closer look at how to unlock synergies between the AI Act and the Digital Services Act.
