
Freshfields TQ

The AI Safety Summit – what is next for the global AI governance landscape?

The UK will host the world’s first AI Safety Summit on 1-2 November 2023. The summit, which will be held at Bletchley Park in England, aims to bring together key countries and other stakeholders to focus on risks that might be created or exacerbated by the most powerful AI systems, and to obtain agreement on international action on AI development.

In the first article in this series, we explained the objectives of the summit and outlined the approaches countries are currently taking to regulate AI within their own borders.

In this article, we examine the options for international action to regulate AI and the prospects for a global AI regulatory framework.

Are global AI laws or a global AI regulator likely to emerge from the summit?

The UK’s objectives for the summit include ‘a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks’. 

Global governance systems to address international risks are not unprecedented. For example, the UN’s International Civil Aviation Organization (ICAO) is responsible for worldwide alignment of air regulations and procedures. India’s Prime Minister recently called for a global framework on the ethical use of AI and cited the aviation and financial sectors as examples of similar global coordination. It is understood that the UK government may also see the longer-term future of AI governance as being similar to that of financial markets — ie, national regulations buttressed by an international safety net to protect an overall global system. 

However, the summit seems unlikely to lead to global AI laws or a global AI regulator in the short or medium term. Regulation of most matters, even (and perhaps especially) those relating to technology and data, tends to be based on the laws of specific jurisdictions or individual states. For example, the handling of personal information is governed by a mix of laws specific to individual jurisdictions, such as the EU’s General Data Protection Regulation (GDPR), China’s Personal Information Protection Law and the California Consumer Privacy Act.

As with personal information laws, any global approach to AI governance will need to recognise cultural differences and accept that global alignment may not be appropriate or even possible.

There also seems to be little global consensus at present on which aspects of AI governance, if any, need to be internationalised. While many policymakers around the world are focused on AI risks, some seem likely to take the view that the existential risks at the heart of the summit remain largely theoretical, at least when compared with certain other risks facing global society.

The US and China, which are widely seen as the global leaders in AI, are geopolitical competitors. Both have been invited to the summit and are keenly aware of the potential of AI in both the economic and military spheres. All countries are likely to be cautious about any global action that might inhibit the exploitation of AI opportunities and perceived national advantages.

Matt Clifford, the UK Prime Minister’s Representative for the AI Safety Summit, has stated that the immediate aims do not include setting up a single new international institution. He has indicated that, in the UK’s view, most countries will want to develop their own approaches to evaluating frontier models.

According to reports, the UK is hoping to secure a joint statement on AI risks following the summit. 

What options, apart from regulation, exist for international action?

While there seems to be little appetite for a global AI regulator, there are a number of other options. For example, the UK White Paper on AI reflects the UK government’s view that regulation is not always the most effective way to support responsible innovation, and points to a range of other tools such as voluntary guidance and technical standards.

Voluntary commitments have long been a tool of governance in many spheres, especially in the emerging technology space. In July 2023 seven major tech companies signed up to a set of voluntary principles championed by the White House, with the companies intending the commitments to remain in effect until regulations covering substantially the same issues come into force. The EU and US are also co-operating to develop a voluntary code of conduct, and the EU is understood to be seeking voluntary codes as a stopgap before its AI Act becomes applicable. Major technology companies will be among the key stakeholders invited to the AI Safety Summit.

International standards are also developed across numerous spheres and voluntarily adopted by many organisations. For example, ISO 27001 is a well-known information security standard that many organisations comply with to demonstrate their reliability and, in many cases, to meet minimum security requirements when tendering for work. A large amount of standards development is currently under way in the AI space, and standards seem likely to play a significant role in the future of AI. In 2022 the UK launched an AI Standards Hub to inform international standardisation efforts and support organisations in participating in those efforts. The voluntary AI Risk Management Framework (AI RMF) released by the US National Institute of Standards and Technology (NIST) has already had a major influence in the US and further afield.

In October 2023, it was announced that the US NIST and Singapore’s Infocomm Media Development Authority (IMDA) had completed a joint mapping exercise between Singapore’s ‘AI Verify’ AI governance testing framework and the NIST AI RMF. The initiative published a crosswalk between the two frameworks to further harmonise international AI governance and reduce compliance costs for the AI industry.

While compliance with standards or other commitments may be strictly voluntary, it can often become practically mandatory for businesses. In particular, it can be challenging for a business not to ‘follow the pack’ once standards become widely accepted by both suppliers and customers, and potentially relevant to the availability of insurance for AI risks.

Whether or not the forthcoming summit leads to any further specific standards work or voluntary principles remains to be seen. However, both may offer a practical alternative, or complement, to national regulation and cooperation between governments.

Are there examples of global regulation that suggest a path forward for global AI governance?

The privacy landscape may suggest a practical way forward. Major jurisdictions have generally cherry-picked and adapted ideas from one another in crafting their own national privacy laws. Even the UK, which retained the GDPR after exiting the EU, is now planning data protection reforms that will diverge from the EU regime. Global privacy laws remain significantly unaligned in several areas and still fail to address many global challenges, including data localisation and the facilitation of international data transfers.

Yet, despite those national differences, many approaches to the regulation of data privacy (eg privacy impact assessments, enhanced transparency and consent requirements, and specific governance requirements) have now become conventional around the world, including across jurisdictions with divergent cultures such as California, the EU and China. Most countries now have data privacy laws. Globally, privacy governance is also supported by widely recognised international standards (eg ISO 27001) and other schemes that organisations voluntarily comply with.

The development and alignment of privacy laws has been helped by nations discussing global principles. The Council of Europe’s ‘Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data’ — a treaty widely known as Convention 108 — was ratified by over 50 countries. The cross-cultural influence of Convention 108 is reflected in the fact that the date it was originally opened for signature (28 January) is now commemorated in various countries, including the UK, EU states, Nigeria and the US, as an international ‘Data Privacy Day’.  

The AI Safety Summit, which will close on 2 November, aims to spark an international conversation on how countries can work together to improve AI safety and share understanding. Even if 2 November does not become ‘AI Day’, the evolution of privacy laws shows the potential for a global exchange of ideas to help countries move along their different paths in a common direction.

Tags

ai, data, data protection, eu ai act, eu ai liability directive, global, gdpr, tech media and telecoms, standards