This article was first published on Bloomberg Law on 10 December 2024.
- Freshfields’ attorneys examine generative AI’s global impact
- Companies must understand how regulators approach AI
Artificial intelligence, especially generative AI, is widely expected to bring fundamental change across almost every area of society. AI has the potential to enable new and enhanced services, drive business efficiencies, improve health care, transportation, education, and safety, and to be the most pivotal technology of our generation.
Alongside these expected benefits, AI also carries potential risks, which is why many policymakers believe regulation is necessary to drive and support responsible AI innovation.
A 2023 UK parliamentary report identified at least 15 potential risks from AI, such as AI systems that inadvertently perpetuate discrimination or misinformation, and even the risk that AI surpasses human intelligence—potentially raising ‘long term risks for the future of humanity.’ Separately, a 2024 MIT paper identified more than 700 risks.
Policymakers, and the communities they represent, are racing to work out their approaches to AI. Although policymakers face a multi-dimensional regulatory challenge, views about how AI should be regulated vary across the globe.
The result is that regulatory approaches are fragmenting globally. Several companies, for example, have delayed offering AI services in the EU because of local laws.
Broad-based cross-border regulatory collaboration seems limited in the near term. The first legally binding treaty on AI, signed by the EU, US, and UK, among others, has weak enforcement mechanisms and leaves significant discretion to signatory states. While national bodies are collaborating through ‘safety institutes’ to evaluate the safety of the most powerful AI models, their work largely relies on voluntary cooperation.
The EU has enacted extensive legislation modeled on existing product safety laws. Its AI Act features a broad definition of AI and will establish different duties for providers, deployers, and others involved in AI. Some AI will be prohibited or heavily regulated depending on its perceived risk. The law is backed by hefty fines of up to 35 million euros ($38 million) or 7% of global annual turnover, whichever is higher.
We see a different picture in the US: States such as Colorado and California have AI-focused laws, but they vary greatly. For example, Colorado’s law, like the EU’s, takes a risk-based approach.
The obligations imposed under the EU and Colorado Acts, however, also differ markedly. In further contrast, California passed a series of targeted AI-related laws, but Gov. Gavin Newsom vetoed the most comprehensive proposed law, citing the need for regulation to be based on “empirical evidence and science.”
New AI laws have also been enacted in China and proposed in several other countries, including Brazil and Canada. The EU, China, California, and Colorado are global outliers with extensive AI-focused laws. Many other jurisdictions’ narrower laws home in on specific aspects, such as transparency (for example, laws in Utah, Illinois, and Maryland focused on ensuring businesses disclose when AI is used in certain cases), deepfake pornography, or the integrity of electoral processes (including in several US states).
Instead, many countries and US states continue to rely largely or exclusively on existing privacy, intellectual property, unfair competition and antitrust, consumer protection, and other laws to regulate AI. While some major jurisdictions (including the UK, US, and Singapore) have also implemented policies or guidance to streamline how existing regulatory systems apply to AI, in other cases businesses must read tea leaves to understand how regulators will approach AI.
Adding to the uncertainty, some countries that had ruled out AI regulation are now moving in that direction, while other early movers on regulation (such as Thailand and China) are adopting a more cautious approach. In 2023, India stated it didn’t intend to regulate AI, but has since announced plans for a Digital India Act that would, among other things, regulate high-risk AI systems.
In contrast, Thailand has been working on a royal decree on AI system service businesses since 2022, but officials recently indicated they may study regulatory developments in other countries before finalizing the law. Similarly, President Joe Biden’s 2023 AI executive order set the direction for the US approach to AI and provided guidelines for regulating it, but President-elect Donald Trump has promised to revoke it.
The UK’s new government has indicated it will look to regulate a handful of providers responsible for the most powerful AI models; however, it appears to share its predecessor’s desire to avoid extensive new AI regulation for the time being.
A rapidly evolving regulatory landscape, with a patchwork of varying approaches across countries, seems likely to continue. Parties interested in cross-border AI projects will therefore need to navigate those differences, prepare for regulatory investigations into AI across multiple countries and frameworks, and manage the various interfaces between AI regulation and existing local laws.
Businesses developing or deploying AI should ensure they have the right governance structures to navigate these challenges, backed by appropriate personnel, expertise, and professional support. It’s important for businesses to keep abreast of changes and to ensure that they, their leaders, and their governance frameworks remain flexible.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Reproduced with permission. Published 10 December 2024. Copyright 2024 Bloomberg Industry Group 800-372-1033. For further use, please visit https://www.bloombergindustry.com/copyright-and-usage-guidelines-copyright/