With generative AI taking the world by storm, AI is now receiving significant attention from policymakers. Businesses and governments are putting enormous energy into exploring how they can leverage AI’s many potential opportunities and how its risks can be mitigated.
As part of those global efforts, the UK will host the world’s first AI Safety Summit on 1-2 November 2023. The summit, which will be held at Bletchley Park in England, aims to bring together key countries and other stakeholders to focus on risks that might be created or exacerbated by the most powerful AI systems, and obtain agreement on international action regarding AI development.
The forthcoming summit recognises that the opportunities and risks of AI, like those of all digital technologies, are global and may require co-ordinated action to address them. In this article we draw on our international experience advising in the AI, data and technology space to explain the objectives of the summit and the current approaches countries are taking to regulate AI within their own borders.
In a second blog post we explore options for international action to regulate AI.
What are the focus and objectives of the summit?
The UK government, which is hosting the summit, has been clear that it understands the vast potential of AI to make a positive impact on the world economy and in people’s daily lives. Against this backdrop, the summit will focus on risks created or exacerbated by the most powerful AI systems, as well as how safe AI can be used for public good and to improve people’s lives across a variety of use cases.
The summit is expected to focus on big-picture existential risks, rather than the immediate risks that many organisations developing and using AI are already familiar with (eg potential bias, hallucinations and deep fakes). The summit will centre its attention on ‘frontier AI’ — meaning highly capable general-purpose AI technologies that may be further developed for a variety of applications.
The UK government’s press release specifically highlights:
- misuse risks, such as where a bad actor, aided by new AI capabilities, develops dangerous technologies or mounts biological or cyber-attacks; and
- risks that could emerge from advanced AI systems escaping human control.
The UK has set five objectives for the summit:
- a shared understanding of the risks posed by frontier AI and the need for action;
- a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks;
- appropriate measures which individual organisations should take to increase frontier AI safety;
- areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance; and
- a showcase of how ensuring the safe development of AI will enable AI to be used for good globally.
A wide range of other international policy dialogue on aspects of AI is also ongoing, including at the OECD and through the G7’s Hiroshima AI Process.
What steps are countries taking to regulate AI within their own borders?
There are three layers of regulation relevant to AI:
- general regulation that is technology- and sector-neutral (eg privacy laws);
- sector-specific regulation (eg for financial services); and
- AI-specific regulation.
While many existing general and sector-specific laws already regulate aspects of AI, policy discussions around the globe have increasingly focused on the question of whether AI should be specifically regulated and, if so, how.
The EU, which is currently finalising an AI Act and an AI Liability Directive, is widely seen as a pioneer in introducing legislation exclusively focused on AI. Once applicable, those new laws will affect a wide range of actors, including providers, distributors and users of AI, both within the EU and further afield. The AI Act will likely impose obligations relating to, among other things, risk management systems, information to be given to users, accountability, documentation, safeguards and self-certification requirements, all backed by significant financial penalties. The AI Act also contains provisions specifically designed to encourage AI innovation, such as by promoting regulatory sandboxes. Several other jurisdictions, such as Canada and Brazil, are already planning to introduce AI-specific laws and are expected to take inspiration from the EU’s AI Act.
The AI Act may be agreed by the EU institutions later this year. However, China has already beaten the EU in the race to become the first major economy to introduce AI-specific laws. China has implemented several such laws, including a law regulating deep fakes, a law targeting recommendation algorithm services and, most recently, the ‘Interim Measures for the Management of Generative Artificial Intelligence Services’ (the Measures), which entered into force in August 2023. The Measures apply to generative AI products used to provide services to the public in China, and include requirements relating to, for example, the prevention of discrimination, respect for intellectual property and other third-party rights, transparency, and accuracy and reliability.
As the titles of their respective laws suggest, a crucial difference in approach is that China has sought to regulate specific uses and perceived risks of AI, whereas the EU’s AI Act seeks to regulate AI more generally (albeit with different requirements for certain types of AI and use cases).
Many other countries are taking a less direct approach to AI regulation, and are often seeking to work within existing laws and regulatory structures.
The UK government has proposed a ‘pro-innovation’ regulatory framework based on five overarching principles to guide the development and use of AI, and envisages its existing regulators taking responsibility for applying those principles in practice across sectors.
The US White House has released a Blueprint for an AI Bill of Rights which, while highlighting some key priorities for the private sector, imposes no enforceable regulatory requirements. At the US federal level, existing regulators, such as the Federal Trade Commission, have stepped in to explain how they will use their current authorities to regulate AI. The White House is also expected to issue an Executive Order on AI shortly; while details are currently limited, the order is expected to build on the voluntary commitments the White House has already secured from some key private sector companies.
Freshfields has been tracking approaches to AI regulation across many major jurisdictions; a summary is shown in the heatmap below. Highly divergent national approaches to AI regulation risk burdening the AI industry and potentially inhibiting innovation.
Note: Data for federal states (eg USA, Canada and Australia) relates to laws at federal level. Data for the EU relates to EU, rather than member state, laws.
Regulatory action targeting AI is, then, already being taken by many countries around the world.
But what options exist for international action to regulate AI, and what are the prospects for a global regulatory framework? Click here to read our second blog post, which considers those important issues.