Freshfields is Knowledge Partner for the Responsible AI Forum 2024. Hosted by the Rothschild Foundation and organised by Tortoise Media, the annual Responsible AI Forum brings together well-known figures from the business and academic worlds for thoughtful and pragmatic discussions around AI innovation, application and regulation.
(First published by Tortoise Media, 6 March. Republished with permission. Copyright 2024 Tortoise Media.)
Governments and regulators are racing to develop responses to AI’s potential risks and opportunities. While almost everyone agrees on the need for governments to step in, there’s no global consensus on how to do it.
So what? A range of different interests, cultures and policy perspectives on AI is driving differing regulatory approaches:
General regulation. The EU is taking the lead on comprehensive legislation. Its AI Act, which is nearing completion, defines AI broadly and will establish different duties for AI users and various actors involved in the supply of AI.
Different AI uses will be subject to prohibition or varying degrees of regulation, depending on the perceived risk.
For example, using AI to filter job applicants will be highly regulated but not prohibited.
Non-compliance will result in hefty fines – up to €35m or seven per cent of a business’s annual global turnover.
Coming from a regulatory superpower, the EU’s AI Act will carry global weight and is already influencing draft laws in Canada and Brazil.
Other jurisdictions are limiting themselves to regulating specific AI use cases:
- China is regulating piecemeal, with separate rules governing generative AI, recommendation algorithms and deepfake technologies.
- US President Joe Biden’s Executive Order, issued last year, directed federal agencies to draw up rules on government use of AI and to impose obligations on certain AI developers and infrastructure providers.
Existing laws repurposed. Some countries are refreshing old laws. In addition to the 2023 Executive Order, the US is generally relying on existing regulators like the Federal Trade Commission for governance at the federal level. Elsewhere:
- In Europe, national data protection authorities are using the EU’s General Data Protection Regulation to issue new guidance on AI.
- The UK has outlined five AI principles for regulators to apply within their existing remit.
- Singapore has launched voluntary initiatives and guidance.
A few countries are taking a wait-and-see approach. South Africa, for example, has made no policy decisions on AI yet.
Chart: highest level of AI-specific laws achieved, selected jurisdictions.
Countries are also starting to coordinate internationally. The G7 has published a voluntary AI “code of conduct” for advanced systems, and the Council of Europe is working on a legally binding international treaty on AI. Following the UK’s AI safety summit in November last year, two more such conferences are lined up for 2024 in South Korea and France.
Common themes. Despite differing national approaches, some common regulatory concerns are emerging, including:
- Transparency – Disclosure and explanation when AI is used.
- Fairness – Ensuring AI systems produce fair outcomes, including by not discriminating on factors like gender or race.
- Safety – Guarding against the use of AI in ways that could harm society or individuals, with particular attention paid to the use of powerful “general purpose” AI models.
- Innovation – Balancing all of the above with the imperative to not stifle innovation.
Implications. Businesses using AI will need to respond simultaneously to the rapid pace of AI innovation and to continually evolving AI governance. They’ll need to keep regulation in mind when it comes to:
- AI governance, including developing, testing and launching AI use cases.
- Engaging with key stakeholders, such as investors and workforces.
- Allocating risk and ownership of AI assets.
- Investing in businesses developing or deploying AI.
To succeed, multinational businesses should focus on the common regulatory themes that are emerging, and be prepared to adapt to the toughest regulatory regimes where they do business.
For further insights on navigating global AI regulation and the factors that companies must monitor, watch this video with Freshfields partners Giles Pratt, Natasha Good and Beth George.