
Regulating AI Globally Mimics a Six-Dimensional Game of Chess

Freshfields attorneys address the global nature of regulating AI and the many factors companies must monitor as governments and regulators advance new policies.

(Article published by Bloomberg Law, 5 March 2024.)

Governments around the world are putting enormous energy into exploring how to leverage AI and mitigate its potential risks. Working out how to regulate AI appropriately looks like a complex chess game, influenced by policymakers’ views of six—sometimes conflicting—dimensions:

  • Potential risks
  • Risk mitigants
  • Who policies should target
  • Safeguarding AI opportunities
  • Perceived urgency
  • Regulatory strategies

Businesses developing or using AI need to watch the whole chessboard as AI regulation plays out, focusing on the key themes emerging from different countries’ strategies. They will often have to implement the rules mandated by the most prescriptive countries where they operate, and engage appropriate experts to supervise that ongoing endeavor as AI regulation continues to evolve.

Potential Risks

In late 2023, 28 countries and the EU signed the Bletchley Declaration, which lists around a dozen potential AI risks. Many of those risks already resonate with existing laws.

For example, certain AI systems may collect or output data in ways that engage intellectual property, privacy, and contractual rights. However, in an AI context, there is sometimes uncertainty over how existing laws apply and disagreement over whether they’re adequate.

Other potential risks are more novel, such as the creation of harmful deepfakes and the risk that individuals cannot understand the basis on which an AI system makes significant decisions.

There are still wide areas of disagreement between countries over the relative importance of these potential risks. Despite hopes it would be finalized in 2023, the Council of Europe’s negotiations on the first legally binding international convention on AI are ongoing.

Risk Mitigants

Policymakers often seek to mitigate potential risks by ensuring that:

  • Use of AI is disclosed and explained
  • AI systems are fair, robust, and safe
  • There is appropriate oversight of the way AI is used, with clear accountability for outcomes
  • Relevant AI models with many possible uses are subject to additional governance
  • Individuals have ways to contest harmful outcomes or decisions generated by AI

The weight policymakers attach to each risk mitigant will be influenced by factors such as their ideological views, national culture, and the perspectives of key stakeholders.

Who Policies Should Target

Policymakers must decide which organizations should be subject to interventions. Policies could target the developers of an AI system, the organizations that deploy it, or others in the supply chain.

Different actors have varied responsibilities, control, capacities, incentives, and challenges. Policymakers need to understand each piece on the regulatory chessboard before making their move.

Policymaking commonly distinguishes between the developers and deployers of AI systems. China’s generative AI law and the draft EU AI Act impose more obligations on developers of AI systems than on organizations that merely use those systems.

Safeguarding AI Opportunities

The potential for AI to enhance societies is widely recognized. Countries are increasingly seeking to encourage AI-related innovation, industries, and jobs, including through financial support, innovation fora, and reskilling initiatives.

Policymakers are likely to be cautious about any action that may inhibit AI opportunities or perceived national advantages. For example, France and Germany lobbied for changes to the draft EU AI Act to protect their AI start-ups. The US and China are geopolitical competitors and keenly aware of the potential of AI.

Perceived Urgency

The perceived urgency for policy interventions heavily influences strategies.

The EU institutions believe certain AI systems pose significant risks that need to be addressed quickly. The EU is therefore putting in place comprehensive AI-specific laws, institutions, and enforcement mechanisms.

The UK government, in contrast, takes the view that “introducing binding measures too soon…could fail to effectively address risks, quickly become out of date, or stifle innovation.” The UK’s “pro-innovation” approach starts from the premise that any urgent risks are addressable under current laws, and that understanding of AI risks isn’t yet mature enough to support effective regulation.

The perceived urgency for action may also change over time as new evidence emerges. Swiftly following a high-profile deepfake imitating President Joe Biden, the US Federal Communications Commission issued a declaratory ruling in February 2024 addressing AI technologies that generate human voices.

Regulatory Strategies

The different strategies for regulating AI are an important part of the chess game. Available policy tools include:

  • New general and wide-ranging AI-specific laws (e.g., the draft EU AI Act)
  • New laws targeting specific AI use cases or technologies (e.g., China’s laws on generative AI, deep synthesis and recommendation algorithms)
  • Repurposing or updating existing laws that are technology and sector-neutral (e.g., privacy laws) or that govern specific sectors (e.g., financial services)
  • Voluntary codes and standards
  • International action

The type of tool used can make a significant difference to businesses. In particular, the more wide-ranging, mandatory, and prescriptive the regulation, the less space is left on the board for the AI innovators.

Governments that regard AI legislation as undesirable (such as the UK) or that may be unable to agree on a path forward (such as the US) often focus on obtaining voluntary commitments from key players or providing guidance for regulators or businesses.

In July 2023, seven major tech companies signed up to a set of voluntary principles championed by the Biden Administration, with other companies making additional voluntary commitments afterwards. The UK has established an AI Standards Hub to advance international standardization efforts.

The opportunities and risks of AI are global and require collaborative thinking to make progress.

The last year saw extensive inter-governmental cooperation on AI, including ongoing work on evaluating potential safety risks, such as through the 2023 global AI Safety Summit held in the UK and work at the G7. Policymakers generally recognize that watching, and learning from, each other’s strategies will ultimately strengthen their own game.

Highly divergent national approaches to AI regulation risk burdening the AI economy, and potentially inhibiting innovation. Global businesses developing or deploying AI will need a coordinated team of experts to play this game of AI chess across all the regulatory dimensions.

The likely winners will keep a close eye on the whole board, while finding ways to cut through, focusing on the key themes emerging from the different types of regulatory intervention.

For more on AI regulation: www.freshfields.com/ai.

Reproduced with permission. Published 5 March 2024. Copyright 2024 Bloomberg Industry Group 800-372-1033. For further use please visit https://www.bloombergindustry.com/copyright-and-usage-guidelines-copyright/ 
