

EU presents draft AI Ethical Guidelines

In its Communications on “Artificial Intelligence for Europe” and “Coordinated Plan on Artificial Intelligence”, the Commission set out its strategy for AI, which aims to:

  • Boost the EU's technological and industrial capacity and AI uptake across the economy;
  • Prepare for socio-economic changes brought about by AI; and
  • Ensure an appropriate ethical and legal framework.

In line with the implementation of the strategy, and as discussed in our blogpost on 3 September, the European Commission’s High-Level Expert Group (HLEG) on AI has been tasked with developing draft AI Ethics Guidelines and Policy & Investment Recommendations. It has been working on both deliverables over the past several months and has held a series of workshops, in which the Firm has participated.

Its first deliverable, the AI Ethics Guidelines for Trustworthy AI, has been published in draft form and is open to public consultation until 18 January. Responses to the consultation must be made via the EU AI Alliance, a platform through which a broader group of stakeholders can feed into the work of the HLEG. Following the consultation, the final guidelines will be presented in March 2019 during the first annual assembly of the EU AI Alliance.

With the draft guidelines, the HLEG aims to set out a framework for Trustworthy AI by providing guidance for:

  • Ensuring that AI is developed, deployed and used with an ethical purpose (i.e. that it is human-centric, based on the EU's values and in line with the Charter of Fundamental Rights of the EU);
  • Realising Trustworthy AI as early as possible in the design phase, including by listing the requirements for Trustworthy AI (i.e. accountability, respect for privacy and robustness) and offering an overview of technical (traceability, auditability and explainability) and non-technical (regulation, standardisation and accountability) methods for its implementation; and
  • Assessing Trustworthy AI, by setting out a preliminary and non-exhaustive assessment list for its operationalisation as well as use-cases (these will be included in the final version of the guidelines).

An earlier leaked draft of the guidelines contained “Red Lines for the application of AI”, namely a non-exhaustive list of applications of AI which “should not happen on EU-territory”. Interestingly, the published draft guidelines instead address “critical concerns raised by AI” and note that the HLEG “did not reach agreement on the extent to which the areas as formulated here below raise concerns. We are thus asking specific input on this point from those partaking in the stakeholder consultation”. The applications of critical concern listed in the published draft guidelines include identification without consent, covert AI systems, normative & mass citizen scoring without consent and Lethal Autonomous Weapons Systems (LAWS).

As alluded to above, the final guidelines will contain use cases, together with a tailored assessment list, for healthcare diagnosis and treatment, autonomous driving, insurance premiums, and profiling & law enforcement. Stakeholders are specifically invited to share their thoughts on each of the four use cases and, in particular, on the sensitivities they raise.

It is worth mentioning that the HLEG is clear in stating that the guidelines “are not intended as a substitute to any form of policymaking or regulation […] nor do they aim to deter the introduction thereof”. The HLEG’s second deliverable, the Policy & Investment Recommendations, will be published in May 2019 and will certainly inform the direction of the Commission’s next legislative mandate (2019 – 2024).

While these guidelines therefore do not intend to take the place of regulatory action, it is clear that the EU aims to leverage its ethical approach to AI to enhance its global competitiveness. Indeed, the guidelines are addressed to all stakeholders developing, deploying or using AI; they aim to foster discussion on an ethical framework for AI at a global level; and they will ultimately contain a mechanism allowing stakeholders to endorse them. Furthermore, the draft guidelines argue that this process “allows Europe to position itself as a leader in cutting-edge, secure and ethical AI. Only by ensuring trustworthiness will European citizens fully reap AI’s benefits.”

Tags

ai, europe, insurtech, automotive, healthcare, cryptocurrency, life sciences