Freshfields TQ

Legally-binding international treaty on AI – revised draft published

The Council of Europe (CoE) Committee on Artificial Intelligence (CAI) recently published a revised draft ‘Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law’ (the Convention).

The Convention is intended to provide a legally binding instrument on the development, design, use and decommissioning of AI systems based on the CoE’s standards on human rights, democracy and the rule of law. The CAI hopes that the Convention will become a global instrument to which states from around the world (and not just CoE members or those in Europe) will become parties.

In this blog post we summarise key points of the Convention and how it fits into the emerging landscape of AI governance.

What is the CoE, is it part of the EU and why is it focusing on AI?

The CoE is an international organisation that was established following World War II to promote democracy, human rights and the rule of law across the European continent and beyond. It has 46 member states with around 700 million citizens. Most, but not all, of the CoE’s members are from the European continent. Certain other major states that are not members engage with the CoE as ‘observers’, including Canada, Japan, the US and Mexico. 

The CoE is distinct from the EU, which is a supranational political and economic union with 27, exclusively European, member states. All members of the EU are members of the CoE.

Unlike the EU, the CoE cannot make laws. It can, however, seek to encourage states to sign international agreements and can play a role in enforcing them. The CoE has a long and proven track record of pioneering governance standards that become global benchmarks, including:

  • the European Convention on Human Rights, which is now over 70 years old and adjudicated by the CoE’s European Court of Human Rights; and 
  • the landmark Convention 108, of 1981, which is widely credited as having shaped data privacy regulation in Europe and worldwide. The cross-cultural influence of Convention 108 is reflected in the fact that the date it was originally opened for signature (28 January) is now commemorated in various countries, including the UK, EU states, Nigeria and the US, as an international ‘Data Privacy Day’.

The CoE believes that AI presents both benefits and risks, and it is seeking to develop the Convention to help ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law.

What would the Convention require?

States that choose to participate in the Convention would commit to maintaining appropriate laws or other measures relating to AI systems to give effect to the provisions set out in the Convention.

Scope

The Convention’s definition of an ‘AI system’ aligns with the definition recently adopted by the OECD and is therefore also similar to the definition of AI found in the EU’s AI Act (which was heavily based on the OECD definition). The Convention defines an AI system as:

A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.

Many other aspects of the scope of the Convention remain the subject of debate. 

In general, the proposals focus on circumstances in which research and development activities regarding AI systems have the potential to interfere with human rights, democracy and the rule of law. However, the precise scope and permitted exemptions (including for private sector entities, national security and under national laws) remain to be confirmed. For example, the US is participating in discussions as an observer and is understood to disagree with the EU’s view that the Convention should apply to private sector entities by default.

General obligations

Each state party to the Convention would agree to a number of high-level commitments. Those commitments are still being negotiated and may evolve. However, they seem likely to include an obligation on each state to maintain national measures aimed at ensuring the following in the context of activities within the lifecycle of AI systems:

  • the integrity, independence and effectiveness of democratic institutions and processes, including the principle of separation of powers, respect for judicial independence, and access to justice;
  • participation in democratic processes and fair access to public debate;
  • respect for human dignity and individual autonomy;
  • adequate transparency and oversight requirements tailored to the specific contexts and risks;
  • accountability and responsibility for violations of human rights (and potentially democracy and the rule of law);
  • respect for equality and prohibitions of discrimination as provided under applicable international and domestic law;
  • overcoming inequalities in line with applicable domestic and international human rights obligations;
  • preservation of human health;
  • adequate safety, security, accuracy, performance, quality, data quality, data integrity, data security, governance, cybersecurity and robustness of AI systems (and potentially also reliability, validity and trust in AI systems); and
  • where an AI system substantially informs or takes decisions or acts in ways impacting on human rights:
    • effective procedural guarantees, safeguards and rights (potentially including certain record keeping and disclosure obligations); and
    • impacted persons being made aware that they are interacting with an AI system.

Additional obligations are also under discussion, such as obligations on each state in relation to:

  • protecting the ability of individuals to reach decisions free from undue influence or manipulation by AI systems;
  • measures to enable detection and transparency of content generated by AI systems;
  • privacy and protection of data;
  • protection of the environment;
  • protection of whistleblowers;
  • measures for the identification, assessment, prevention and mitigation of risks and impacts to human rights, democracy and rule of law arising from the design, development, use and decommissioning of AI systems; and 
  • laws or other measures as may be required to implement mechanisms for a moratorium, ban or other appropriate measures in respect of uses of AI systems that are considered incompatible with human rights, the functioning of democracy and the rule of law.

Each state party to the Convention would also commit to:

  • establishing or designating one or more effective mechanisms to oversee compliance with its obligations; and
  • taking measures to ensure accessible and effective remedies for harms caused by AI systems. 

In the event of a dispute among states that are party to the Convention, EU states would be able to rely on various EU mechanisms, and in other cases the parties would be expected to seek settlement of the dispute through negotiation or any other peaceful means of their choice.

Comparison with the EU’s AI Act

The Convention is a framework intended to apply to both EU and non-EU states. Weighing in at under 20 pages, the Convention is positioned at a higher level and is less specific than the several hundred pages of the EU’s forthcoming AI Act. In addition, the Convention aims to protect universal human rights and will impose legal obligations on countries that choose to accede to it, whereas the AI Act focuses on safety within the EU market (including the protection of European fundamental rights) and will directly impose mandatory obligations on public and private entities with a nexus to the EU.

Nonetheless, the Convention and EU’s AI Act clearly push in similar directions and generally share similar values. 

It is likely that, for EU states, the Convention’s measures will largely be covered by existing EU laws, such as EU data protection law and the EU’s AI Act. With limited exceptions (eg protections for whistleblowers), it is understood that the EU does not want the Convention generally to go further than the EU’s AI Act.

Next steps 

Work on the Convention began back in 2019, long before generative AI took the world by storm in 2023. As we have reported previously, in 2023 governments around the world greatly accelerated efforts to regulate AI both nationally and internationally. We expect the CoE to aim to finalise the Convention as soon as possible, even though the original 2023 target date has now been missed. 

Both CoE and non-CoE states will be able to become parties to the Convention. It would formally enter into force a certain period after five signatories, including at least three member states of the CoE, have expressed their consent to be bound by the Convention.

The Convention envisages a follow-up mechanism and further international co-operation to help share learnings, encourage amicable settlement of disputes, and develop future iterations of the Convention.

Tags

ai, data, data protection, eu ai act, eu ai liability directive, eu data act, eu digital strategy, europe, global