The publication by the Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin) of a discussion paper titled Big data and artificial intelligence: Principles for the use of algorithms in decision-making processes and of a consultation paper titled Machine learning in risk models – Characteristics and supervisory priorities demonstrates the ambitions of Germany’s financial watchdog to supervise the use of artificial intelligence (AI) and big data in the financial services sector.

BaFin regards AI as the combination of big data, computing resources and machine learning. With machine learning, computers are given the ability, via special algorithms, to learn from data and experience. This contrasts with rules-based processes, in which a programmer determines how and which results are to be achieved using certain data sets.

How might the financial sector use AI and machine learning?

In its discussion paper, BaFin gives the following examples of AI use in the financial sector:

  • Motor insurance companies assessing driver data, such as speed and location, in order to determine insurance premia.
  • Credit institutions determining corporate credit ratings and default risk by using natural language processing to analyse annual reports.
  • Asset management companies using algorithms to make investment (and divestment) decisions.

The consultation paper also considers how AI and machine learning may be used to calculate regulatory own-funds requirements (so-called Basel Pillar 1) and manage risk (Basel Pillar 2).

Using algorithms: BaFin’s four basic principles

The discussion paper sets out key principles for the use of algorithms in decision-making. BaFin says these constitute preliminary ideas for ‘minimum supervisory requirements relating to the use of artificial intelligence’.

The four principles are to:

  • have clear management responsibilities for the design and application of AI;
  • have appropriate risk and outsourcing management processes;
  • ensure the results of algorithm-based decision-making processes are not systematically biased; and
  • not use types of differentiation that are prohibited by law, such as, potentially, gender-based pricing in the insurance sector.

Non-compliance with these principles may create legal and reputational risks.

The interaction between humans and AI processes is a recurring theme in the two publications. Based on the principle of ‘putting the human in the loop’, BaFin expects people to be sufficiently involved in the interpretation and use of AI-based outputs. For instance, if AI-based credit ratings differ significantly from results using other (established) processes, the final decision should be made by a human.

What regulatory status do these publications have?

The discussion paper is another example of BaFin’s practice of addressing technological innovations through non-binding guidelines for regulated entities. Any existing law or administrative practice that is stricter than the guidelines remains unaffected.

The consultation paper itself has no regulatory status. However, the results of the consultation will influence BaFin’s future approach to the supervision of machine learning, which may result in the regulator having to approve certain algorithms. This includes machine learning used in internal models to calculate regulatory own-funds requirements (Basel Pillar 1) and manage risk (Basel Pillar 2).

How do the publications fit into the broader discussion on the regulation of AI?

BaFin’s discussion paper overlaps with the EU Commission’s draft legislative proposal on AI (‘the proposed AI Regulation’), which was published in April 2021 and kicked off the discussion on how to govern AI across the EU (find out more in our briefing).

Like BaFin, the EU Commission takes a risk-based approach: the proposed AI Regulation imposes different regulatory requirements depending on the level of risk to fundamental rights and safety, and bans certain particularly harmful AI systems outright. Permitted high-risk AI systems should be overseen by humans, which should reduce the risk of erroneous AI-assisted decisions and help protect fundamental rights and safety.

An example of such a high-risk system is using AI to evaluate an individual’s credit score or creditworthiness (although systems developed by small-scale providers for their own use are exempt). In addition to other requirements, these AI systems will need to undergo a validation process before, during and after development that is proportionate to the size of the provider’s organisation.

The proposed AI Regulation also imposes specific documentation obligations. To ensure that the results of an algorithm can be reproduced, in line with BaFin’s expectations, the proposed AI Regulation requires high-risk AI systems to enable the automatic recording of events (‘logs’) while they are operating.

The proposed AI Regulation also addresses BaFin’s principle that algorithm-based decision-making processes must not be systematically biased or discriminatory: it permits AI systems to process the special categories of personal data referred to in Article 9(1) of the EU General Data Protection Regulation, which include data on racial or ethnic origin, religious belief and biometric data, to the extent strictly necessary for monitoring, detecting and correcting bias.

Regarding credit institutions, the proposed AI Regulation refers to the Capital Requirements Directive (CRD), under which credit institutions must implement robust internal governance arrangements. Such arrangements must recognise obligations under the proposed AI Regulation.

Furthermore, the proposed AI Regulation requires the provider of an AI system to carry out a conformity assessment, which will form part of the supervisory review and evaluation process under the CRD. Through this approach, the EU Commission intends to establish consistency between the two initiatives.

Finally, the proposed AI Regulation provides that the authorities supervising the EU’s financial services legislation (including, where applicable, the European Central Bank) should also supervise compliance with the proposed AI Regulation as it applies to regulated entities. Financial institutions should generally welcome this, as enforcement would lie with a single authority.