With countless applications and the potential to revolutionise various industries, Artificial Intelligence (AI) has taken the world by storm. To promote the safe, secure and trustworthy use of AI worldwide, the G7 recently released a set of 11 Guiding Principles and a voluntary Code of Conduct for advanced AI systems. Both documents aim to guide organisations developing advanced AI systems, with the Code of Conduct providing detailed and practical guidance to support the higher-level Guiding Principles.
The G7’s publications came during a busy period for AI governance and are part of a wider jigsaw of national and international efforts to grapple with the question of how AI should be governed and how nations can align their policies. Alongside the G7’s publication, the US president issued a landmark executive order on ‘safe, secure, and trustworthy’ AI, and both came just a few days before the start of an international AI Safety Summit in the UK.
In this article we summarise the contents of the G7 Guiding Principles and Code of Conduct, and compare them to the draft EU AI Act and the UK AI regulatory landscape.
The G7 Guiding Principles and Code of Conduct
The G7 is an intergovernmental forum consisting of Canada, France, Germany, Italy, Japan, the UK, the US and the EU. The new Guiding Principles and Code of Conduct were produced as part of the G7’s ‘Hiroshima AI Process’ (established in May 2023), which aims to advance discussions on inclusive AI governance and interoperability to achieve trustworthy AI, in line with the G7’s shared democratic values.
To minimise the risks arising from the use of AI, the Guiding Principles and Code of Conduct together present a non-exhaustive list of actions applicable to organisations in both the public and private sectors. They cover a wide range of topics, including the design, development, deployment and use of advanced AI systems.
These actions include, for example:
- Taking appropriate measures to identify, evaluate, and mitigate risks across the AI lifecycle. This includes implementing internal and external testing measures and actively monitoring and documenting vulnerabilities post-deployment, as appropriate, depending on various risk factors explained further in the Code of Conduct.
- Regularly publishing transparency reports and information on privacy and governance arrangements, in order to support the safe use of AI systems.
- Using privacy-preserving techniques for training AI systems and implementing appropriate safeguards to protect intellectual property rights.
- Ensuring that users of AI systems can identify AI-generated content using content authentication and provenance mechanisms (eg watermarking).
Keeping in mind the global impact of AI systems, the G7 also calls for collaboration and the development of global technical standards.
EU AI landscape
The EU’s interest in regulating the use of AI dates back to April 2021, when the draft AI Act was first proposed by the European Commission. Many aspects of the new G7 Guiding Principles and Code of Conduct appear to be modelled on elements of the draft EU AI Act, which largely takes a risk-based approach, applying different obligations to AI systems depending on various perceived ‘risk’ factors. The G7 Guiding Principles and Code of Conduct also suggest a risk-based approach.
The European AI strategy as a whole aims to make AI trustworthy and human-centric. The upcoming AI Act is intended to reflect this through a set of complementary, proportionate and flexible rules in line with European values. These rules include measures to minimise risks (including conformity assessments for high-risk AI systems), transparency obligations and a governance structure at both European and national level. All of those objectives are broadly in line with the G7 documents. However, the EU AI Act is more comprehensive and detailed, and will impose mandatory obligations backed by heavy fines on both developers and users of AI systems. The EU AI Act also seems likely to take a broad approach to the definition of in-scope AI, whereas the G7 documents focus on ‘advanced’ AI systems. For further background on the AI Act, see here.
With the penultimate legislative trilogue on the AI Act having concluded in recent weeks, the EU is on course to potentially become the global player with the most comprehensive AI legal framework by the end of the year.
UK AI regulatory landscape
The UK’s proposed regulatory approach, summarised in a 2023 White Paper, envisages nimble, light-touch governance for AI centred on:
- no new AI-specific laws and a minimal coordinating role for government;
- the establishment of a regulatory framework comprising five overarching principles for relevant UK regulators to apply under existing laws in relation to AI; and
- proposals designed to promote tools for trustworthy AI, introduce regulatory sandboxes, address capability gaps within regulators and work with international partners.
Click here to read a more detailed blog post on the UK’s approach.
Various aspects of the UK’s approach are echoed in the G7 Guiding Principles and the Code of Conduct. For example:
- The non-exhaustive list of actions proposed by the Guiding Principles and the Code of Conduct mirrors the flexible approach the UK has taken so far, as set out in its AI White Paper.
- The emphasis on considering the AI lifecycle as a whole when identifying and evaluating risk also echoes the UK data protection authority’s (the ICO’s) guidance on AI and data protection.
- The emphasis on having global standards for AI is similar to the UK government’s initiative to shape global standards for AI, with the launch of the AI Standards Hub in October 2022.
However, the Guiding Principles and the Code of Conduct seem to go beyond the UK’s current approach on certain issues. For example, the UK AI White Paper emphasises a pro-innovation approach aimed at businesses, while the G7 documents prioritise the development of AI to help tackle global challenges (eg the climate crisis, global health, education). The UK also takes a broad approach to the definition of AI, whereas the G7 documents focus on ‘advanced’ AI systems.
Some of the challenges which form a focus of the Guiding Principles and the Code of Conduct were on the agenda of the international AI Safety Summit held in the UK on 1–2 November 2023. See our blog post for further background.
What’s Next?
With more and more countries taking steps to regulate AI, it will be interesting to see how their policies align. The G7’s Hiroshima AI Process is part of a wider range of international discussions on guardrails for AI, including at the OECD, the Global Partnership on Artificial Intelligence, the EU-US Trade and Technology Council and the UK-hosted international AI Safety Summit mentioned above.
The Guiding Principles and Code of Conduct put forward by the G7 may serve as a first step towards global alignment and standardised approaches to AI governance, but both are expected to be ‘living documents’ that evolve over time.
The G7 leaders have called on organisations developing advanced AI systems to commit to the application of the International Code of Conduct and the first signatories are expected to be announced soon.
While the Guiding Principles and the Code of Conduct merely provide ‘voluntary’ guidance for actions by organisations and governments, the similarities with the underlying principles of the draft EU AI Act and the UK AI White Paper suggest that these principles can serve as helpful guidance for businesses considering their AI governance.
Given that the Hiroshima Process documents anticipate different jurisdictions, even within the G7, taking their own unique approaches to implementing the suggested actions, it may be some time before we see an agreed standard form of global AI governance.