The AI Seoul Summit, co-hosted by South Korea and the UK, took place on 21-22 May 2024. The summit focused on safety, innovation and inclusivity in relation to ‘frontier AI’, meaning highly capable general-purpose AI models or systems that can perform a wide variety of tasks and match or exceed the capabilities of the most advanced AI models. Sessions were generally held virtually and brought together governments, AI companies and other stakeholders.
Key outcomes included:
- new voluntary AI safety commitments from leading AI companies;
- agreements between 10 countries and the EU to collaborate more closely on AI safety and AI governance, including the establishment of a new network of AI Safety Institutes; and
- agreement among 27 nations and the EU to work together to identify thresholds at which AI model capabilities could pose severe risks.
Below we give further detail on those outcomes and next steps.
What was the background?
The Seoul Summit was intended to build on the AI Safety Summit hosted by the UK at Bletchley Park in November 2023. It aimed to address three priorities relating to frontier AI:
- Safety: To reaffirm the commitment to AI safety and to further develop a roadmap for ensuring AI safety;
- Innovation: To emphasise the importance of promoting innovation within AI development; and
- Inclusivity: To champion the equitable sharing of AI’s opportunities and benefits.
What were the key outcomes?
Frontier AI Safety Commitments
The summit saw 16 AI tech companies with headquarters spanning North America, Asia, Europe and the Middle East agree new voluntary Frontier AI Safety Commitments.
The voluntary commitments given by each company include:
- implementing current best practices related to frontier AI safety (eg internal and external red-teaming and working toward information sharing, alongside various other specifically listed steps);
- developing and continuously reviewing internal accountability and governance frameworks and assigning sufficient resources to do so;
- publishing a safety framework outlining how it will measure the risks of its frontier AI models; and
- being publicly transparent about the implementation of its commitments.
Each company also agreed:
- that its published safety framework would outline the thresholds at which risks (unless mitigated) would be deemed by it to be ‘intolerable’ and the steps it would take to ensure those thresholds are not surpassed;
- not to develop or deploy an AI model or system if mitigations cannot keep risks below those thresholds; and
- that details of its thresholds will be released before a successor international AI safety summit to be held in France in early 2025.
The Seoul Declaration
Australia, Canada, the EU, France, Germany, Italy, Japan, South Korea, Singapore, the US and the UK signed up to the ‘Seoul Declaration’. That declaration aims to foster international cooperation and dialogue on AI ‘in the face of its unprecedented advancements and the impact on our economies and societies’.
Among other things, the signatories agreed:
- that AI safety, innovation and inclusivity are interrelated goals;
- on the importance of interoperability between AI governance frameworks in line with a risk-based approach;
- to advocate for policy and governance frameworks, including risk-based approaches, that foster safe, innovative and inclusive AI ecosystems; and
- to strengthen international cooperation on AI governance through engagement with other international initiatives at the UN and its bodies, the G7, the G20, the OECD, the Council of Europe, and the Global Partnership on AI.
New global AI safety network
The months following the November 2023 AI Safety Summit at Bletchley Park have seen steady growth in the number of countries establishing government-backed ‘AI Safety Institutes’ to focus on AI safety.
During the Seoul Summit, the countries that had signed the Seoul Declaration (above) also signed on to the ‘Seoul Statement of Intent on AI Safety Science’ and its plans for a new global network of AI Safety Institutes.
The new network’s aims include furthering a common understanding of aspects of AI safety, including by promoting complementary and interoperable technical methodologies and overall approaches. The network envisages collaboration based on strengthening:
- coordination and efficiency;
- research, testing, and guidance capacities;
- information sharing;
- monitoring of AI harms and safety incidents; and
- the use of shared technical resources for purposes of advancing the science of AI safety.
The new network will strengthen and expand on various previously announced bilateral collaborations between AI Safety Institutes. The US Department of Commerce indicated it hoped the new AI Safety Institute network will ‘catalyze a new phase of international coordination on AI safety science and governance’.
Shared risk thresholds for frontier AI
The final day of the summit saw 27* nations (including the US and the UK, but not China) and the EU agree a Ministerial Statement in which they set the ambition to develop shared risk thresholds for frontier AI and to identify when AI model capabilities could pose ‘severe risks’. Examples of such severe risks include AI helping malicious actors acquire chemical or biological weapons, or AI models evading human oversight.
The signatories will aim to develop their proposals ahead of the AI Action Summit to be hosted by France in early 2025.
Next steps
The commitments made in Seoul represent another meaningful development in international alignment on AI governance and norms, and help lay the groundwork for the next international AI safety summit, to be held in France in early 2025.
The summit’s final Ministerial Statement also referenced several high-level shared objectives, including:
- facilitating access to AI-related resources (especially for small and medium-sized businesses);
- respecting and safeguarding intellectual property rights; and
- encouraging AI developers and deployers to consider their potential environmental footprint.
Those issues seem likely to be addressed in greater depth at future summits and underscore the breadth of AI-related issues that governments are now grappling with.
The voluntary nature of the new ‘Frontier AI Safety Commitments’ agreed by AI companies has been criticised by some observers. However, voluntary commitments have long been a tool of governance in many spheres, and especially in the emerging technology space. For example, in 2023 a number of major tech companies signed up to a set of voluntary AI principles championed by the White House. The new Frontier AI Safety Commitments represent an important development in that they were made by companies headquartered around the globe. The fact that both Chinese and US companies have participated is particularly significant given the strength of those countries’ AI tech industries and their differing approaches to technology regulation generally.
The list of signatories to the Seoul Declaration, new AI Safety Institute network and final Ministerial Statement shows the ever-increasing cooperation, alignment and sophistication of governments around the world in addressing AI risks. However, it is notable that the signatories to most of these statements are generally limited to a group of Western-allied democracies.
*List of parties agreeing the Ministerial Statement:
- Australia
- Canada
- Chile
- France
- Germany
- India
- Indonesia
- Israel
- Italy
- Japan
- Kenya
- Mexico
- the Netherlands
- Nigeria
- New Zealand
- the Philippines
- the Republic of Korea
- Rwanda
- the Kingdom of Saudi Arabia
- the Republic of Singapore
- Spain
- Switzerland
- Turkey
- Ukraine
- the United Arab Emirates
- the United Kingdom
- the United States of America
- a representative of the European Union