
Building Your Company’s AI Governance Framework

The use of AI has grown rapidly, from simple chatbots to sophisticated conversationalists that come close to passing the Turing test: whether a machine can hold a conversation with a human without being detected as a machine. AI can create significant savings and opportunities, and as a result companies in almost all industries are incorporating it into their products and internal processes.

Companies using or developing AI face both legal (see Generative AI: Five things for lawyers to consider and GenAI: what are the risks associated with its adoption and how do you mitigate those to maximise the opportunities it offers?) and non-legal challenges, including business and reputational risks (e.g., relying on inaccurate AI output that results in harm to the business or its customers, employees, or consumers). 

In the December 2023 IAPP-EY Professionalizing Organizational AI Governance Report, 56% of respondents said that their companies do not fully understand the benefits and risks of AI deployment, and 57% said that their companies do not control their use of AI.[1] Introducing a program to manage the use of AI is one of the top actions that the general counsel of an organisation contemplating using AI should take. This article proposes principles for companies to consider in developing an AI governance program.

Evolving AI Regulation

There is growing international consensus on the risks, challenges and opportunities posed by AI (e.g., the G7 Guiding Principles and Code of Conduct for AI providers), but there is not yet consensus on how to regulate it. In the European Union, the Artificial Intelligence Act (AI Act) envisages a risk-based approach to regulating AI systems, from limited-risk to high-risk AI systems/use cases and general-purpose AI models (also called foundation models). In practice, companies will be required to adopt governance measures to ensure compliance with their new obligations, which depend on their role: AI provider, deployer/user, distributor or importer (for further information, see our dedicated Artificial Intelligence Act webpage). To promote early implementation of these forthcoming measures, the European Commission is launching the “AI Pact”, a scheme that encourages key EU and non-EU stakeholders to share, on a voluntary basis, their concrete actions and best practices to prepare for the AI Act.

Some countries, such as the UK and Japan, have chosen to introduce principles and guidance instead of legislation, driven by a desire to encourage investment and avoid stifling the development of AI. Some public bodies and even individual companies have proposed frameworks (e.g., the Council of Europe’s Draft Framework Convention on AI, Human Rights, Democracy, and Rule of Law), principles and guidance to manage the safe development and use of AI (e.g., the National Institute of Standards and Technology (United States), the Personal Data Protection Commission (Singapore) and the Office of the Privacy Commissioner for Personal Data (Hong Kong)).

Further, every use of AI has to comply with general (non-AI-specific) laws; in particular, certain uses of AI are restricted under laws such as privacy laws. For example, privacy laws in the US, as well as the EU and UK, provide for certain rights related to the processing of personal data for automated decision-making, as well as requirements for businesses to conduct data protection impact assessments for certain processing, including high-risk activities involving personal data. The onus is therefore on companies to develop their own AI governance frameworks.

Click here for an overview of current and pending AI regulation in selected jurisdictions. 

Developing an Internal AI Governance Program

Developing an internal AI governance program is critical to help companies navigate the growing mass of AI regulation, meet their legal obligations, and mitigate potential risks.  While each company will want to tailor its AI governance program to its own business, the following elements help to build a strong foundation for an AI governance program.  

  • Decision-making: Effective governance requires clearly defined roles, decision-making processes and accountability. It may be helpful to assign responsibility for the adoption and use of AI to specific individuals or departments to avoid overlapping responsibilities (or responsibility gaps).
  • Board oversight and reporting obligations: Companies will want to consider how to keep their boards appropriately updated on their development, deployment and use of AI. Experts and external advisors could also be invited to provide feedback. 
  • Dedicated oversight: Companies may benefit from establishing a dedicated body to oversee their use of AI.  The December 2023 IAPP-EY Professionalizing Organizational AI Governance Report found that “60% [of respondents] said their organizations have either already established a dedicated AI governance function or will likely establish one in the next 12 months”.  This body may take responsibility for different activities, for instance:
    • Mapping and monitoring how AI systems are being used (see further details on risk management below, and the illustrative sketch after this list);
    • Laying down recommendations, internal policies and processes for the development, deployment and use of AI;
    • Keeping abreast of technological and regulatory developments; and 
    • Training employees on appropriate use of AI.  

A designated cross-functional group responsible for a company’s AI policy may help to avoid inconsistencies between different departments (such as HR, IT and legal) and other compliance policies (e.g., AI compliance and data compliance, which may overlap).

  • Risk management: Identifying, assessing and controlling risks before using an AI system is important. Risk management is particularly relevant in the following areas—although this ultimately depends on each company’s individual circumstances: 
    • IP rights: Creative industries and software businesses may face greater risks around the use (and potential infringement) of copyrighted material. Companies should understand how their use of AI could implicate IP rights such as copyright. This can arise when AI is trained on data that is copyrighted or otherwise subject to license terms or other restrictions on its use. Some licenses may restrict the use of information as training data, and it is unclear whether training on copyrighted information is covered by “fair use” principles and similar copyright limitations. These issues can also arise in the AI’s output, when the output itself mimics copyrighted content. Users should be aware of where the data they use with AI comes from and any conditions attached to its use. They should also be aware that content generated with AI may not qualify for IP protection.
    • Data protection: AI uses vast amounts of training data to recognise patterns and make predictions. This can raise privacy considerations when personal data is used as training data. Companies will need to take measures to ensure that their use of AI complies with privacy and data-protection obligations; those that process sensitive personal data (e.g., health data) may need to manage their use of AI even more carefully. Companies should check whether a privacy impact assessment or data protection impact assessment is required before using personal data, and should consider whether they can mitigate these risks through “privacy by design” or measures such as using aggregated and/or anonymised data where possible or using privacy-enhancing technologies.
    • Consumer protection: Companies that are using AI systems in a consumer-facing capacity should be mindful of consumer protection laws. Companies should check what disclosures and information may need to be provided to consumers around their use of AI (including, as noted above, around the collection and use of data), as well as any related user flows. Companies should also consider what steps to take to monitor the AI system’s performance and outputs on an ongoing basis.  Recently, authorities have started to explore competition and consumer protection issues across the AI value chain.
    • Cybersecurity: Many companies that use AI do not own the AI model.  Rather, they license an AI model from third parties and may run the model on third-party servers (e.g., in the public cloud).  This may entail the transfer of data to third parties and requires careful management.  Regulators are increasingly requiring appropriate management of third parties that process data on a company’s behalf.
    • Worker protection: Companies should be aware of the challenges posed by AI systems and tools used in recruitment, work allocation, employee monitoring and other similar activities related to the employment relationship. Such systems might expose companies to an increased risk of discriminatory decisions towards their workers. In recent years, this has given rise to discrimination claims brought by platform workers asserting that they were subject to automated decisions that did not consider their individual situations. Companies should also not overlook the involvement of workers’ representatives: some EU member states already require consultation with, or the consent of, employee representative bodies when introducing automated machinery or AI technology. Furthermore, the AI Act requires deployers who are employers to inform workers’ representatives and affected workers that they will be subject to an AI system before putting a high-risk AI system into service or use in the workplace.
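
To make the mapping, monitoring and risk-triage points above more concrete, the sketch below shows one minimal way a company might record AI systems in an internal register and flag follow-up actions before deployment. It is purely illustrative: the record fields, risk levels and triage rules are our assumptions for the purposes of this sketch, not a prescribed methodology or legal advice, and any real register would need to reflect the company's own circumstances and applicable law.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative risk tiers, loosely echoing the AI Act's risk-based approach.
class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-use register."""
    name: str                      # e.g. "CV screening assistant"
    owner: str                     # accountable individual or department
    use_case: str                  # what the system is used for
    processes_personal_data: bool  # triggers a privacy review if True
    used_in_employment: bool       # recruitment/monitoring uses raise worker-protection duties
    consumer_facing: bool          # triggers consumer-protection disclosures
    risk_level: RiskLevel = RiskLevel.MINIMAL
    open_actions: list[str] = field(default_factory=list)

def assess(record: AISystemRecord) -> AISystemRecord:
    """Very simplified pre-deployment triage: flag follow-up actions.

    The thresholds here are assumptions for illustration only.
    """
    if record.processes_personal_data:
        record.open_actions.append("Check whether a DPIA / privacy impact assessment is required")
    if record.used_in_employment:
        # Employment and worker-management uses are listed as high-risk under the AI Act.
        record.risk_level = RiskLevel.HIGH
        record.open_actions.append("Inform workers' representatives and affected workers")
    if record.consumer_facing:
        record.open_actions.append("Review consumer disclosures and transparency notices")
    return record

# Example: registering and triaging a hypothetical recruitment tool.
register = [
    assess(AISystemRecord(
        name="CV screening assistant",
        owner="HR",
        use_case="Shortlisting job applicants",
        processes_personal_data=True,
        used_in_employment=True,
        consumer_facing=False,
    ))
]
for rec in register:
    print(rec.name, rec.risk_level.value, rec.open_actions)
```

Even a simple register of this kind gives the oversight body a single place to see which systems exist, who is accountable for them, and which compliance actions remain open.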

As both the promises and risks of AI become clearer, new legislation, obligations and challenges are likely to emerge.  Adopting an effective and flexible AI corporate-governance program will enable businesses to take advantage of these new opportunities whilst ensuring compliance with the law and managing risks. 

For more on AI and its legal issues, see: https://www.freshfields.com/en-gb/our-thinking/campaigns/technology-quotient/tech-and-platform-regulation/artificial-intelligence-regulation/

 

[1] International Association of Privacy Professionals, IAPP-EY Professionalizing Organizational AI Governance Report (December 2023), page 22.
 

 
