Energised by our attendance at the recent Tortoise Responsible AI Forum, Paris AI Action Summit and City & Financial AI Regulation Summit, the international Freshfields AI team looks at five key themes for businesses to take away from the latest AI discussions:
1. Businesses must respond to growing AI legal risk under traditional legal frameworks
Existing legal regimes, including privacy, consumer protection and competition law, have provided fertile ground for regulators to launch investigations relating to AI – especially in the US and Europe.
Crucially, regulators aren’t the only enforcers; we are seeing more private AI litigation, including mass claims, with the potential to leverage global litigation funding of around $19bn. Examples of AI actions include claims focusing on alleged discrimination, unlawful data use and misstatements overstating AI capabilities.
2. AI FOMO in the mainstream
But even with that legal backdrop, we don’t expect the pace of AI development and adoption to slow. A mixture of innovation culture and a fear of falling behind competitors is spurring companies’ AI ambitions.
That FOMO is even being documented in the filings of many listed companies.
3. AI governance to focus on AI’s value-adds and the top legal risks
Conversations about AI governance, which blends legal and ethical issues, are now very much the norm for businesses.
A decade ago many data companies thought they had proprietary datasets, but it often turned out that either the data wasn’t theirs or it wasn’t unique. Businesses developing or deploying AI today need to avoid the same trap. Leaders should ensure their organisations secure:
- rights to use the data used to train and refine their AI systems; and
- protectable rights in valuable AI developments and outputs.
Beyond locking in protectable value, the top two legal risk issues tend to be complying with specific AI laws and managing data provenance.
On the first of those, although the world has seen fewer AI-specific laws enacted than was expected a year or so ago, businesses need to watch out for specifically banned or heavily regulated (often branded ‘high risk’) AI practices. AI laws in China, the EU, some US states and South Korea will prohibit or heavily regulate specific AI use cases (for example, certain recruitment, emotion recognition, insurance or credit scoring use cases).
When it comes to data provenance, innovation teams are escalating a range of questions about contractual digital rights management, intellectual property and privacy.
4. Agentic AI is buzzing
Everyone is talking about agentic AI.
Although the building blocks for agents and AI have been around for a long time, the renaissance of agentic AI – with increasing autonomy from humans and greater scope for real-world impacts – is likely to see new questions asked about who is responsible for AI actions across the value chain.
5. Grow, bAIby, grow
Agents, reasoning and efficiency gains are all good reasons to think of 2025 as the year in which AI spurs economic and business growth. Investments announced in the US, in Paris and in the UK are certainly lining up behind that ambition.
We expect there to be a big push this year for the legal environment to support building resilient AI-powered businesses.
Each business will need legal teams that can work alongside it on its unique AI journey, managing the big legal risks and locking in protectable value.