Freshfields TQ

Legally-binding international treaty on AI finalised

After years of negotiations by representatives of over 50 countries, the Council of Europe (CoE) recently adopted the ‘Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law’ (the Convention).

The Convention is intended to provide a legally binding framework relating to the governance of activities within the lifecycle of AI systems, grounded in the CoE’s standards on human rights, democracy and the rule of law. The CoE hopes that the Convention will become a global instrument to which states from around the world will become parties (not just CoE members).  

Any obligations on businesses will derive from the national laws implemented by states that opt to join the Convention, rather than from the Convention itself. In practice, signatory states will have significant scope to vary how they implement their obligations under the Convention, especially with respect to the private sector.

In this article we summarise the key points of the Convention, how it fits into the emerging landscape of AI governance, and its relationship with the EU’s AI Act.

See our previous blog post for further information on the CoE, the background to its involvement with AI and other successful global governance initiatives it has been involved in. 

Scope

The Convention’s definition of an ‘AI system’ aligns with the definition recently adopted by the OECD and is similar to the definition of AI systems found in the EU’s AI Act.

The Convention covers activities within the lifecycle of AI systems undertaken by public authorities, or private actors acting on their behalf. 

The application of the Convention to activities within the lifecycle of AI systems by other private entities reportedly proved contentious, particularly among EU and US negotiators working on the treaty. The compromise position adopted in the Convention offers countries the choice of either:

  • applying the Convention’s obligations to all private actors; or
  • addressing the ‘risks and impacts’ arising from the activities of private actors in some other manner ‘conforming with the object and purpose of this Convention’. 

In practice this choice is expected to allow for greater divergence in the measures signatory states apply to private actors. 

Subject to a few exceptions, the Convention does not apply to:

  • matters relating to defence;
  • activities related to the protection of national security interests; or
  • research and development activities regarding AI systems not yet made available for use.

The final Convention also includes a recently added provision acknowledging that federal states (such as the US) may sign the Convention with only the federal government being bound. In such cases the federal government is to encourage the relevant constituent states to implement the Convention. 

General obligations imposed on signatories

Each state party to the Convention agrees to a number of high-level commitments in respect of in-scope activities within the lifecycle of AI systems (see ‘Scope’ above). These include obligations to maintain certain national laws or other measures aimed at ensuring the following:

  • the integrity, independence and effectiveness of democratic institutions and processes, including the principle of separation of powers, respect for judicial independence, and access to justice;
  • participation in democratic processes, including fair access and participation in public debate, as well as the ability to form opinions freely;
  • respect for human dignity and individual autonomy;
  • adequate transparency and oversight requirements tailored to the specific contexts and risks, including with regard to the identification of content generated by AI systems;
  • persons are notified that they are interacting with AI systems, rather than with a human;
  • accountability and responsibility for adverse impacts on human rights, democracy and the rule of law;
  • respect for equality (including gender equality) and the prohibition of discrimination;
  • that inequalities are overcome to achieve fair, just and equitable outcomes;
  • privacy and protection of individuals’ personal data; and
  • the reliability of AI systems and trust in their outputs.

The Convention also requires each state to:

  • consider the actual and potential impacts on human rights, democracy and the rule of law posed by in-scope AI systems, and adopt measures for the identification, assessment, prevention and mitigation of such risks and impacts;
  • maintain measures to ensure that any adverse impacts are adequately addressed; and
  • assess the need for a moratorium, ban, or other appropriate measures concerning certain uses of AI systems.

The Convention sets out in some detail how the first of these obligations should be applied by states, including the need for monitoring and, ‘where appropriate’, the testing of AI systems before they are made available. Each signatory is also called upon to establish controlled environments for the development, experimentation and testing of AI systems under the supervision of competent authorities. Additionally, signatories commit to promoting adequate digital literacy and digital skills for all segments of the population.

Remedies

Remedies for individuals

Each state party is obliged to maintain measures to ensure: 

  • the availability of accessible and effective remedies for violations of human rights resulting from activities within the lifecycle of in-scope AI systems, as well as effective procedural guarantees, safeguards and rights where the impact on human rights is ‘significant’;
  • that certain minimal information regarding in-scope AI systems with the potential to significantly affect human rights is documented and made available to affected persons; and
  • an effective means for affected persons to lodge complaints with the relevant authorities.

Disputes between state parties

Parties to the Convention are expected to seek settlement of any disputes between themselves through negotiation or any other peaceful means.

Comparison with the EU’s AI Act

The Convention is a framework intended to apply to both EU and non-EU states. Weighing in at around 12 pages, it is positioned at a higher level and is less specific than the several hundred pages of the EU’s forthcoming AI Act. In addition, the Convention aims to protect universal human rights and will impose legal obligations on countries that choose to accede to it, whereas the AI Act focuses on safety within the EU market (including the protection of European fundamental rights) and will directly impose mandatory obligations on public and private entities with a nexus to the EU.

Nonetheless, the Convention and EU’s AI Act clearly push in similar directions, generally sharing similar values and seeking to take risk-based approaches. 

It is likely that for EU states the Convention’s key obligations will be covered by existing laws, such as EU data protection law and the EU’s AI Act. 

Next steps 

The Convention will be opened for signature by countries around the world on 5 September 2024.

Both CoE and non-CoE states will be able to become parties to the Convention. It will formally enter into force on the first day of the month following a three-month period after five signatories, including at least three member states of the CoE, have expressed their consent to be bound by the Convention.

The Convention envisages a follow-up mechanism and further international co-operation to help share learnings, encourage amicable settlement of disputes, and develop future iterations of the Convention.

While the Convention is an important milestone in the evolution of global AI governance, businesses will need to watch not only which states sign it but also how those states choose to implement their obligations in practice. National implementations seem likely to vary significantly, and the Convention is just one part of the emerging AI governance landscape, alongside initiatives such as the commitments given by states and AI companies at the recent AI summit held in Seoul and the G7’s voluntary Code of Conduct for organisations developing advanced AI.

Given the Convention’s high-level nature, the compromise reached on private entities and the scope for countries to take different approaches, it appears that the EU’s AI Act will remain the dominant driver of the global conversation on AI regulation. However, the Convention forms a double act with the AI Act, and may help achieve a shallower but broader buy-in to certain CoE- and EU-aligned norms from a wider group of states than those that may follow the EU’s AI Act more closely.

 

Tags

ai, eu ai act