Over the past couple of weeks, we’ve been at the heart of two major events focusing on the implications of AI for global businesses.
Hosted by the Rothschild Foundation and organised by Tortoise Media, the 2024 Responsible AI Forum at Waddesdon Manor in the UK brought together well-known figures from the academic, policy-making, regulatory and business worlds for insightful discussions around AI innovation, its impact on society, business adoption and the role of regulation. Freshfields was well-placed to be the knowledge partner for this event, leading with insights into the current state and direction of AI regulations around the world. The forum generated wide-ranging dialogue on AI responsibilities and governance among thought leaders.
A few days earlier, Freshfields hosted a roundtable discussion in London with over 50 senior representatives of listed companies to discuss the practical steps that businesses should be taking to harness AI-led opportunities, whilst understanding and managing risks. That meeting saw fruitful exchanges of ideas around emerging AI regulation, protection of value in AI activities, legal risks and the practical steps that boards should be taking.
This blog post outlines five key takeaways from those exciting events for businesses developing or deploying AI.
Key takeaways
1. Increasing pressure on governments to regulate AI. Both events reflected the rising expectations of lawmakers, regulators, consumers and other stakeholders that businesses will take steps to use AI responsibly, and the growing pressure on governments around the world to regulate for safe and responsible AI use. At the Responsible AI Forum we outlined how some countries (eg the UK) have been resisting pressure to regulate, whereas others (eg in the EU) are embracing AI regulation (see the primer prepared for the event, which gives a short overview of global AI laws). This year will see elections in many jurisdictions, including the US, EU and UK, so current policies could change quite dramatically in 2024, and businesses will need to keep abreast of rapidly evolving policies across the jurisdictions in which they operate. As we explained in another recent article published by Bloomberg, global businesses must therefore focus on the common regulatory themes that are emerging, and be prepared to adapt to the toughest regulatory regimes where they do business.
2. Businesses need to equip themselves to capture value from AI. The ‘fog’ of regulation can mask other legal and business-process risks that may prevent a business from capturing anticipated productivity gains and returns from AI. For example, it is important that businesses understand the implications of their use of AI in terms of intellectual property, data, employment, contractual and other legal rights and potential liabilities. It is also vital that relevant staff know how to evaluate AI opportunities and risks.
"One reflection I had from the Tortoise Responsible AI Forum was that AI regulation is grabbing the headlines just now, but AI governance is about so much more than mapping the key regulatory principles. Businesses really need AI champions to help spot AI product development that could be high risk, across a range of legal and reputational touchpoints, and to help capture value in data, ontologies, models and other parts of the AI toolkit.”
Giles Pratt, Partner
3. Businesses need plans to address AI. Based on our experience advising on how to navigate the risks mentioned above, key steps for businesses seeking to leverage AI opportunities may include the adoption of AI governance and compliance frameworks and policies. AI governance efforts can be bolstered by designating a group-level AI lead, as well as suitable individuals as ‘AI champions’, to understand and drive impacts across the business, and by appropriate upskilling on AI all the way up to board level. Good governance will include keeping a checklist of the AI activities that could be the highest risk, and embracing a culture that ensures those risks are escalated to a core cross-functional AI steering group.
“As technologies, regulation and stakeholder expectations continue to evolve, it will be crucial that organisations keep their frameworks and policies under regular review and consider how to embed them in practice. At the Responsible AI Forum we heard how some companies are now considering creating AI champions within teams to help embed AI policies within their organisations.”
Beth George, Partner
“At the Responsible AI Forum and at our Freshfields roundtable we heard how businesses are taking varying approaches to the question of whether to introduce new comprehensive AI policies or to apply existing policies to AI.”
Natasha Good, Partner
4. AI talent is highly mobile. We heard how AI talent is migrating around the world, and that the best people don’t necessarily stay to work in the countries in which they studied. That talent is finding its way to a broad range of tech companies as well as many ‘non-tech’ businesses, which are hiring AI talent at an impressive pace.
5. AI is becoming increasingly central to business strategies – and AI deployments ever more complex. Businesses are increasingly thinking about AI as central to their business strategy, rather than just as a way of supporting internal processes and driving efficiencies. While procurement of off-the-shelf AI remains a common approach, businesses are also increasingly looking at more bespoke forms of third party collaborations. Those collaborations may bring further legal challenges and risks, as well as enhanced capabilities.
For further information you may wish to read our blog post giving a short summary of further top actions that the legal counsel of an organisation contemplating using AI tools should take.