Freshfields is a Knowledge Partner of the Responsible AI Forum 2025, hosted at Spencer House by Tortoise Media in partnership with the Rothschild Foundation. The Responsible AI Forum sets out an agenda for the development, deployment and regulation of artificial intelligence as a responsible technology for society, government and business.
(First published by Tortoise Media, 10 February. Republished with permission. Copyright 2025 Tortoise Media.)
New laws specifically targeting the development or use of AI – such as the EU’s AI Act, China’s genAI legislation and US state AI laws – have been generating extensive policy debate and political noise. So it’s easy to lose sight of the fact that most countries haven’t introduced wide-ranging AI-specific laws, and that many of the AI laws that have been enacted will not be fully binding on businesses for several years.
So what?
The heavy lifting of AI regulation and governance will continue to be done largely by a patchwork of pre-existing and overlapping laws.
AI regulatory heat
Existing laws are being actively enforced by regulators in relation to AI, for example:
- Privacy regulators have taken a lead in regulating AI by launching investigations and enforcement action. In some cases, this has resulted in model launches being delayed and in models being taken offline.
- Consumer protection bodies such as the US Federal Trade Commission (FTC) have been proactive in their assessment of AI. The FTC has indicated that AI-related advertising claims, fraud and scams will be among its areas of focus. US regulators are also targeting companies alleged to have overstated their AI capabilities (so-called ‘AI washing’).
- Antitrust / competition authorities in the US, EU and UK have focused on AI, including AI-related collaborations and acquisitions.
- Many sectoral (eg financial services) regulators are also looking into AI.
AI cold snap
Private AI-related disputes are stacking up, and risk having a chilling effect on innovation. They include:
- Allegations of discrimination or unfair outcomes following AI decision-making, for example impacting consumers.
- Allegations of AI systems being unlawfully trained on others’ data and infringing intellectual property rights.
- Claims relating to the use of AI in hiring or workplace decision-making. Deploying AI to monitor employees’ work may also require prior engagement with employee representatives.
- AI washing claims brought by investors in the US.
- Shareholder activism, seeking greater disclosure from companies about their AI development plans and approach to deployment.
There are also increasing numbers of AI-related class actions in the US. We expect other jurisdictions, including in Europe, to follow that trend toward mass claims; claimant firms are already advertising themselves as AI or tech specialists.
Forecasting the legal risks of AI
Predicting the legal challenges to AI isn’t just about tracking legislative moves. The table below shows the mismatch between which countries have major AI laws in place or in the pipeline, and those countries that typically pose the highest risk for companies developing or deploying AI systems.
![](https://files.passle.net/Passle/5677e7453d947406989fe60a/MediaLibrary/Images/2025-02-11-14-15-08-431-67ab5b6cc921c768e8257581.png)
Legal reality check
Making your business AI-ready now means embedding legal risk and opportunity into your AI governance. Businesses need to navigate multiple regulatory regimes within each country as well as across different countries, and adapt to the toughest relevant laws – from privacy, to IP, to consumer and employment protection, to sectoral regulation, to antitrust. That takes time to embed, and means you need a team that can see all the angles and turn them into a coherent strategy.
What’s more
Despite some companies holding back launch plans in Europe, most businesses aren’t going to put their AI ambitions on ice just because the legal water around them is choppy. Many organisations are turning to AI-related collaborations as a way of accelerating growth and hedging risk. The next battleground is how to get the balance right in those contracts – both in terms of maximising opportunity and creating protectable value, and allocating risk in case a regulator or litigant comes calling.