Freshfields TQ

Navigating AI Governance

Background

AI is transforming industries. Companies around the world are moving quickly to adapt their organisations and integrate AI into their business models. Like most key technologies, AI is not limited to enabling incremental progress, such as improving the efficiency of existing processes. Rather, it has the potential to fundamentally change processes and entire business models.

While AI offers a wide range of unprecedented opportunities, its implementation into business models and organisational set-ups needs to be carefully managed from both a business and legal perspective. This raises a number of issues that go well beyond the typical scope of legal departments. Given the transformative nature of AI, a holistic approach is essential.

The key challenges

Three key challenges inherent in the implementation and use of (generative) AI models and systems drive the need for AI governance:

  • Building trust: Generative AI models are stochastic rather than deterministic. As a result, they may produce factually incorrect output (hallucination), generate biased content, lack domain-specific expertise, or produce outdated results. The output of AI systems can also become inaccurate over time (model drift), particularly due to changes in the statistical properties of the input data (data drift) or in the relationship between inputs and the target variable (concept drift); the sketch after this list illustrates how such drift can be detected. For more details see Generative AI: Opportunities and Limitations in the Legal World | Part I: Large Language Models. Against this backdrop, confidence in the technology needs to be built on a sustainable basis if businesses and end users are to rely on generative AI for high-impact tasks. AI governance can be a powerful tool to help achieve this.
  • Mitigating risk: (Generative) AI models and systems are subject to a myriad of legal requirements stemming from pre-existing rules and regulations. This applies to all actors in the value chain, including developers, customers and end users. Key issues include (i) ownership and use of IP and data, (ii) regulatory compliance (e.g. privacy, employment, anti-discrimination, antitrust and consumer protection), (iii) cybersecurity and (iv) general liability risks (e.g. under customer or supplier contracts). For more details, please see GenAI: what are the risks associated with its adoption and how do you mitigate those to maximise the opportunities it offers?. In addition, legislators around the world are working on regulatory frameworks specifically for AI. In particular, the EU Parliament recently adopted the AI Act, which is expected to enter into force in the coming months after formal approval by the Council. The AI Act takes a risk-based approach, with a focus on ‘high-risk’ AI systems as well as general-purpose AI systems. Providers of high-risk AI systems must meet a range of requirements covering a risk management system, data and data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness and cybersecurity. They must also establish a comprehensive quality management system to ensure compliance with the AI Act, including, in particular, a compliance strategy, procedures for design, development and quality control, and an accountability framework that defines management and staff responsibilities for all aspects of the quality management system. The AI Act also provides for potentially severe fines. Although the requirements for non-high-risk AI systems are different, a diligent approach to regulatory compliance is imperative in relation to any AI system. For more details on the AI Act, see the Briefing on the Freshfields EU Digital Strategy Hub. For an overview of AI regulation globally, see Regulating AI Globally Mimics a Six-Dimensional Game of Chess and The 2024 Responsible AI Forum: Making the rules for AI. For AI systems to be deployed in a commercially viable way, risks (in particular those arising from legal requirements) need to be mitigated. AI governance provides a comprehensive framework and specific tools for this purpose.
  • Enabling innovation: (Generative) AI is a step change and a huge opportunity for any business to create new revenue streams and increase productivity. To realise its full potential, at both the individual and the macro level, a supportive environment that encourages experimentation and development while applying a commercial mindset is key. A successful AI governance system must therefore enable a dynamic, efficient and innovative ecosystem with as few bureaucratic barriers as possible.
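
To make the notion of model and data drift from the first bullet concrete, here is a minimal Python sketch that compares the distribution of one input feature at training time with the distribution observed in production, using a two-sample Kolmogorov-Smirnov test. The feature values, sample sizes and significance threshold are illustrative assumptions, not details of any specific AI system; real drift monitoring would cover many features and model outputs continuously.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(seed=42)

    # Hypothetical numeric input feature as seen in the training data...
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)

    # ...and the same feature as observed in production, where the
    # underlying distribution has shifted (data drift).
    live_feature = rng.normal(loc=0.4, scale=1.2, size=5000)

    # Two-sample Kolmogorov-Smirnov test: are the two samples plausibly
    # drawn from the same distribution?
    statistic, p_value = ks_2samp(train_feature, live_feature)

    ALPHA = 0.01  # illustrative significance threshold
    if p_value < ALPHA:
        print(f"Possible data drift (KS={statistic:.3f}, p={p_value:.4g})")
    else:
        print(f"No significant drift (KS={statistic:.3f}, p={p_value:.4g})")

The point of the exercise is that drift is a measurable, and therefore governable, phenomenon: a governance framework can require exactly this kind of statistical monitoring as an ongoing control.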

Bringing AI governance to work

Successful AI governance addresses the key challenges associated with AI adoption by creating an environment that builds trust in AI systems, mitigates and contains the inherent risks, and enables innovation. The implementation of AI governance systems follows a three-step process:

  • AI contextualisation aims to locate AI within the value creation process of the specific business and organisation. Key questions to answer as part of the contextualisation process include: What kind of AI are you using? Is AI changing your business model or is it "just another" (albeit powerful) driver of process efficiency? What is your organisation's role in the specific AI system? How is your organisation set up from an AI perspective?
  • AI risk mapping aims to identify the key business-specific risks from an AI governance perspective. Risk mapping builds on the preceding contextualisation and highlights the most pressing risks along the value chain to be addressed. Risks are prioritised based on likelihood of occurrence, potential impact and cost of mitigation; the scoring sketch after this list illustrates one way to rank them.
  • AI risk management aims to mitigate and contain AI-related risks based on the individual AI risk map. The toolset is diverse and includes, in particular, organisational design (e.g. who is responsible for AI?), talent acquisition and retention, value sets, design principles, standardisation of processes and/or contractual clauses, key principles of engagement (e.g. relating to model interoperability, allocation of liability, and model behaviour and maintenance), and operational and documentation requirements. Depending on the specific AI risk map, AI risk management may also include tailored ad hoc preventive measures, such as due diligence of supplier or customer contracts with respect to clauses on the use of AI, data and IP. For more details see Building Your Company’s AI Governance Framework.
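
As a simple illustration of the prioritisation logic described under AI risk mapping, the following Python sketch scores risks by likelihood and impact, discounted by mitigation cost. The risks, scales and scoring heuristic are invented for illustration; a real risk map would be derived from the organisation-specific contextualisation.

    from dataclasses import dataclass

    @dataclass
    class AIRisk:
        name: str
        likelihood: int       # 1 (rare) .. 5 (almost certain)
        impact: int           # 1 (negligible) .. 5 (severe)
        mitigation_cost: int  # 1 (cheap to address) .. 5 (very costly)

        @property
        def priority(self) -> float:
            # Illustrative heuristic: exposure (likelihood x impact),
            # discounted by how expensive the risk is to mitigate.
            return (self.likelihood * self.impact) / self.mitigation_cost

    # Hypothetical entries on an AI risk map.
    risk_map = [
        AIRisk("Hallucinated output reaches customers", 4, 5, 2),
        AIRisk("Training data infringes third-party IP", 3, 5, 4),
        AIRisk("Model drift degrades accuracy over time", 4, 3, 2),
        AIRisk("Vendor contract lacks AI liability clause", 2, 4, 1),
    ]

    # Highest-priority risks first.
    for risk in sorted(risk_map, key=lambda r: r.priority, reverse=True):
        print(f"{risk.priority:5.1f}  {risk.name}")

However the weighting is chosen, the value of such an exercise lies less in the exact scores than in forcing a structured, comparable assessment across the value chain.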

Clearly, there is no 'one size fits all' approach. AI governance needs to capture, reflect and potentially even influence the specific business model and organisational set-up. Many of the challenges associated with AI require a cross-functional set-up, often including IT/cybersecurity, legal, data, procurement, strategy and HR. As a result, it is important to break down silos and establish the right forums (such as cross-functional boards) to enable effective and efficient communication and approval processes. While some elements of AI governance may be legally required (e.g. as part of compliance with the AI Act), AI governance goes well beyond compliance and aims to mitigate risk on a broader scale while enabling value creation.

Key takeaways

  • The implementation of AI into business models or organisational set-ups is challenging and needs to be carefully managed.
  • AI governance provides a tailored set of tools to build trust, reduce risk and enable innovation around AI. 
  • Building a robust, resilient and efficient AI governance system requires excellent legal and technical skills, as well as a deep understanding of the relevant business model.

Tags

ai, eu ai act, eu digital strategy, europe, innovation, regulatory, corporate