

A year on from the UK regulators’ Discussion Paper on Artificial Intelligence and Machine Learning: PRA Feedback Statement 2/23

Last week, the PRA published its Feedback Statement (FS2/23) in response to Discussion Paper 5/22 (DP 5/22) on Artificial Intelligence (AI) and Machine Learning, which the Bank of England, the Prudential Regulation Authority (the PRA) and the Financial Conduct Authority (the FCA) released just over a year ago. The Feedback Statement does not include any policy or regulatory proposals, but it summarises the feedback provided in the 54 responses received and identifies a number of themes that emerged from that feedback.

Defining AI 

AI is notoriously difficult to define, so it should be no surprise that most respondents to DP 5/22 thought that a regulatory definition of AI would not be helpful. Instead, most respondents favoured a technology-neutral, outcomes-based and principles-based approach. In their view, this would enable existing approaches to financial services regulation to be leveraged, provide the flexibility needed to adapt to fast-paced change in the sector, and strike a proportionate balance between managing risks and supporting innovation.

Consumer protection and outcomes

Respondents suggested that the regulatory focus should be on consumer protection and the outcomes affecting consumers and markets, rather than on any particular form of technology. This reflects the potential risks that may arise from the use of AI, including bias, discrimination, lack of explainability and transparency, and exploitation of vulnerable consumers or consumers with protected characteristics. While the ability of AI to mitigate consumer harm was recognised (for example, through better identification of unfair or discriminatory outcomes), most respondents considered that the consumer harms associated with AI originate mostly from the underlying data, and that data bias and the unavailability of sufficient key data are key drivers of consumer harm. Metrics-focused consumer outcomes were thought likely to be the most useful in assessing the benefits and risks of the use of AI, and engagement with industry was seen as key to establishing which metrics are most appropriate when considering fairness for consumers, alongside data, model performance and explainability metrics.

To mitigate the potential harms to consumers, respondents suggested that firms should focus on strategies to mitigate data bias, such as addressing data quality issues, documenting biases in data and capturing additional data where that helps to highlight the impact on particular groups with shared characteristics. The authorities could further mitigate the impact on consumers by releasing guidance to clarify regulatory expectations, including on how good consumer outcomes should be interpreted and evaluated in the AI context under existing regulation (such as the FCA’s Consumer Duty). Respondents also proposed that such guidance could be supported by examples of best practice.

Governance risks

Respondents noted the governance risks that may arise where firms lack the skills and experience needed to support the level of oversight required for both technical and non-technical risk management, a gap that may be exacerbated by the use of third-party AI software solutions. These risks will only be amplified by the increasing complexity of, and dependence on, emerging models and the underlying data they use. While most respondents did not think that a new Senior Manager prescribed responsibility for AI should be introduced, some thought that further practical or actionable guidance on how to interpret the ‘reasonable steps’ element of the Senior Managers regime in an AI context would be helpful.

Systemic and market risks 

The paper recognised the potential for the speed and scale of AI to create new forms of systemic risk, including through interconnectivity between AI systems and the potential for AI-induced firm failures. It also recognised potential risks for financial markets, for example through (i) new forms of market manipulation, (ii) the use of deepfakes for misinformation, (iii) third-party AI models producing convergent behaviour, including digital collusion or herding, and (iv) the amplification of flash crashes or other automated market disruptions.

What comes next 

As noted above, no policy or regulatory proposals were put forward in the Feedback Statement. The paper did, however, note that some respondents thought additional guidance could be helpful. In particular, it was suggested that further regulatory guidance may help to address regulatory uncertainty, facilitate effective competition and promote innovation. At the same time, respondents stressed the importance of cross-sectoral and cross-jurisdictional coordination in developing any such guidance, both to ensure coherence and consistency in regulatory approaches and to ensure that UK regulation does not disadvantage UK firms and markets. This suggestion was framed by the challenges firms face from the increasing fragmentation of regulation, another key theme of the responses received.

Accordingly, the questions and concerns raised by respondents will be influential in shaping future regulatory developments on AI. More can be expected from the UK regulators following the release of this Feedback Statement.

Tags

ai, financial institutions, fintech, regulatory