
A chance for AI? Germany greenlights fully automated customer identification

When it comes to the remote identification of clients for anti-money laundering (AML) purposes, Germany has taken a unique stance. Unless an identification document with eID functionality is used, only live video identification satisfies the regulatory requirements for remote identification. Market participants have often criticised this procedure as both too burdensome and not sufficiently secure. The German government has now reacted and proposed a draft ordinance, the Money Laundering Video Identification Ordinance (GwVideoIdentV). The ordinance, if adopted, will also enable AI-based solutions, which may significantly reduce onboarding costs for organisations.

Germany’s current approach to remote identification

When entering into a business relationship, organisations in the financial sector are legally obliged to identify their counterparties and verify the information provided to them. According to the administrative practice of the Federal Financial Supervisory Authority (BaFin), remote identification (unless eID is used) is currently possible only by means of video identification, and such verification must be carried out by employees trained for this purpose. Only then is the procedure assumed to be equivalent to the physical presence of the person. For this reason, it is currently not possible to rely solely on AI-based solutions or other automated applications in this process.

Proposed rules for fully automated procedures

The GwVideoIdentV provides for the use of fully automated procedures, in which not only the recording but also the analysis of that recording takes place without human involvement. Such systems are initially permitted on a test basis. The Federal Office for Information Security (BSI) – and not the AML regulator – is competent to check whether any AI applications used function in such a fault-free manner that they are comparable with the actions of a natural person.
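
To make tangible what “fully automated” means here, the following is a minimal, self-contained sketch of such a pipeline: automated document checks, liveness detection and a 1:1 face match, with no human agent in the loop. All names, data structures and the match threshold are illustrative assumptions, not part of the GwVideoIdentV or any certified product.

```python
from dataclasses import dataclass

MATCH_THRESHOLD = 0.90  # assumed similarity cut-off; real systems tune and certify this


@dataclass
class IdDocument:
    photo_embedding: list[float]   # face embedding extracted from the ID photo
    security_features_valid: bool  # result of automated document checks


@dataclass
class VideoSession:
    face_embedding: list[float]    # face embedding extracted from the live video
    passed_liveness: bool          # e.g. challenge-response or depth-based checks
    document: IdDocument


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0


def automated_identification(session: VideoSession) -> bool:
    """Recording AND analysis run without a human agent in the loop."""
    if not session.document.security_features_valid:
        return False  # reject forged or tampered documents
    if not session.passed_liveness:
        return False  # reject replayed video or other presentation attacks
    # 1:1 comparison: the live face against the photo on the ID document
    score = cosine_similarity(session.face_embedding,
                              session.document.photo_embedding)
    return score >= MATCH_THRESHOLD
```

Under the draft ordinance, the BSI review would concern precisely whether automated steps of this kind perform at a level comparable to a trained human agent.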

This review is designed as a two-stage application process:

  • In the first stage, the organisation must submit an application to the BSI to test the procedure over a trial period of two years. Within six months of the application being submitted, the BSI will test the procedure and check its level of security against the state of the art. If the BSI concludes that it cannot be ruled out from the outset that the procedure offers a level of security equivalent to that of non-automated video identification, it will allow the procedure to be tested for a maximum of two years. Furthermore, the procedure must not be used to identify persons for whom there are indications of a higher risk of money laundering or terrorist financing. The procedure may no longer be used during the trial period if it subsequently becomes apparent that a comparable level of security cannot be achieved, if there are indications that the obliged party (or the third party instructed) lacks the necessary suitability or reliability to fulfil the requirements on a permanent basis, or if the obligations under the GwG are not fully met.
  • In the second stage, an application for permanent use of the procedure must be submitted to the BSI. By the end of the test period at the latest (max. two years), it must be verified whether the procedure is suitable for verifying identity and offers a level of security comparable to a non-automated procedure. Unlike the first-stage test, which is passed when an equivalent level of security merely cannot be ruled out, the BSI must now reach the positive conclusion that the procedure does in fact offer a level of security equivalent to that of a non-automated procedure. If so, the procedure can continue to be used without further testing. Even in this case, however, the BSI retains the power to subsequently prohibit its use under the same conditions as during testing. How the first-stage and second-stage tests will differ in practice remains to be seen.

Further requirements for the use of AI?

The recently adopted EU AML Regulation seems to recognise the potential use of AI for KYC (Know Your Customer) purposes. In line with related obligations under the EU GDPR and the EU AI Act, organisations may adopt KYC decisions resulting from processes involving AI systems if the processed data is limited to data obtained during the KYC process and the decisions are subject to meaningful human intervention to ensure their accuracy and appropriateness. Furthermore, customers must be informed about any such use of AI systems and have the right to challenge the decision. The GwVideoIdentV will cease to apply as soon as (1) the AML Regulation enters into force and (2) the future European AML authority, AMLA, issues its own guidance on KYC processes. Until then, the GwVideoIdentV may remain relevant as a German rule specifying the EU AML Regulation.
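
Read as engineering requirements, the conditions above (data limited to the KYC process, meaningful human intervention, and informing the customer) resemble guardrails around an AI-proposed decision. The sketch below is purely illustrative; the class, field and function names are assumptions and not terms of the AML Regulation.

```python
from dataclasses import dataclass

ALLOWED_SOURCES = {"kyc_process"}  # processed data must come from the KYC process only


@dataclass
class AiKycDecision:
    outcome: str            # e.g. "approve" or "reject", as proposed by the AI system
    data_sources: set[str]  # provenance of the data the AI system processed
    customer_informed: bool = False
    human_reviewed: bool = False


def finalise(decision: AiKycDecision, human_confirms: bool) -> AiKycDecision:
    """Apply the guardrails before an AI-proposed KYC decision takes effect."""
    if not decision.data_sources <= ALLOWED_SOURCES:
        raise ValueError("decision may only rely on data obtained during the KYC process")
    if not human_confirms:
        raise ValueError("meaningful human intervention is required")
    decision.human_reviewed = True
    decision.customer_informed = True  # customer must be told AI was used and can challenge
    return decision
```

The point is not the code itself but the shape of the obligation: the AI system proposes, a human meaningfully confirms, and the customer is informed and can challenge the outcome.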

Considering the specific functions of the respective software application, organisations will need to determine carefully whether AI-based KYC systems fall under the EU AI Act, triggering additional compliance obligations. The EU AI Act generally qualifies “remote biometric identification systems” as high-risk AI systems. However, even though checking whether, among other things, the individual in front of the screen is the same person as on the ID document will likely qualify as “biometric identification” as defined under the EU AI Act, there may be good grounds to argue that such systems should not qualify as “remote biometric identification systems”. These are understood as AI systems for identifying natural persons without their active involvement, typically at a distance, by comparing a person’s biometric data with biometric data contained in a reference database. The EU AI Act also explicitly clarifies that AI systems used for biometric verification whose sole purpose is to confirm that a specific natural person is the person he or she claims to be shall generally not be considered high-risk AI systems. Subject to a case-by-case review of the specific KYC application, these aspects arguably give organisations good grounds to argue that KYC applications are not “remote biometric identification systems” as addressed in the EU AI Act. Nevertheless, KYC applications may incorporate chatbots, which would trigger the applicable transparency requirements under the EU AI Act.
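
The legal distinction drawn here maps onto a familiar technical one: 1:1 biometric verification versus 1:N biometric identification. The following sketch illustrates that difference in deliberately simplified form; the similarity function, threshold and data shapes are assumptions for illustration only.

```python
from typing import Optional


def similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


THRESHOLD = 0.90  # assumed cut-off for a positive match


def verify(live: list[float], claimed_reference: list[float]) -> bool:
    """1:1 biometric verification: confirm a person is who they claim to be.
    This is the KYC pattern; the AI Act generally does not treat it as high-risk."""
    return similarity(live, claimed_reference) >= THRESHOLD


def identify(live: list[float], database: dict[str, list[float]]) -> Optional[str]:
    """1:N biometric identification: match an unknown person against a reference
    database without their active involvement. This is the pattern behind
    'remote biometric identification', which the AI Act treats as high-risk."""
    scored = {name: similarity(live, ref) for name, ref in database.items()}
    best = max(scored, key=scored.get, default=None)
    return best if best is not None and scored[best] >= THRESHOLD else None
```

On this reading, a KYC face match is a verify call: the customer actively claims an identity and presents an ID document as the reference, which is why there are grounds to argue it falls outside “remote biometric identification”.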

Tags

ai, cyber security, eu ai act, fintech, gdpr, innovation, regulatory