
Freshfields TQ



The EU and AI: A New Interface?

2020 marked the start of the EU’s regulatory advance into the areas of online platforms and big data with the publication of the Digital Services Act and Digital Markets Act. 2021 is now set to be the year the EU extends its reach to the regulation of AI.

When this Commission took office in 2019, it had intended to lead the way with a dedicated proposal on AI in its first 100 days in office. Delays have since followed and there is now a growing sense that the EU needs to legislate to offer an alternative to the American and Chinese approaches to the technology. Regulating AI therefore presents an opportunity for the EU to advance its digital sovereignty agenda and to set another global standard, as it so successfully did via the GDPR. The EU has earmarked €2 billion of investment into AI every year, and will be using the new EU budget and COVID-19 Recovery and Resilience Facility to pour further investment into this space. Ultimately, the EU’s aim is to become a hub of AI research and testing, leading to boosted market penetration in Europe.

Against this backdrop, the Commission’s long-awaited proposal for the regulation of AI is now set to be launched in Q2 of 2021. But what has the EU’s approach to AI been so far, and what can we expect from the EU’s future regulation of this still relatively nascent technology?

What is the EU's approach to AI?

Since the launch of its Strategy on AI in April 2018, the EU has established its approach to AI as ‘human-centric’ – i.e. ‘AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being’. This links to the Commission’s overall ambition of putting people at the centre of data and technological developments – it is proactively promoting this ambition as its fundamental differentiator vis-à-vis the approaches being taken in other major jurisdictions.

Key to achieving this approach are the following elements:

  • Investment. In order to remain internationally competitive, the EU recognises that it needs to increase investment in AI to: (i) boost its technological and industrial capacity; and (ii) increase AI uptake across the European economy.
  • Socio-economic changes. As AI will inevitably transform the labour market, the EU wants to ensure its citizens are not left behind. The EU therefore plans to equip its citizens with the necessary digital skills to master this new technology and to nurture talent for a competitive AI employment market.
  • Ethical and regulatory framework. Fundamental to the ‘human-centric’ approach is the idea of building consumers’ and businesses’ trust in AI. So far, the EU has focused on the ethical side of AI. In July 2020, a non-binding Assessment List for Trustworthy AI was published so that developers and deployers of AI could implement principles – such as transparency and accountability – in practice. The next step in building trust is the development of a clear European regulatory framework for AI.

How does the EU intend to regulate AI?

AI is already subject to existing European legislation such as that on fundamental rights (e.g. data protection, privacy, non-discrimination), consumer protection, and product safety and liability rules. However, certain features of AI (e.g. perceived lack of transparency, partially autonomous behaviour) are not easily captured by existing legislation. The Commission recognised this in its White Paper on AI (published in February 2020) and consequently proposed a specific regulatory framework for AI.

In short, this framework proposes creating mandatory legal requirements for ‘high-risk’ AI applications specifically.

AI applications would be deemed ‘high-risk’ if they are: (i) employed in a sector where significant risks can be expected to occur, such as healthcare, transport and energy; and (ii) used in such a way that significant risks are likely to arise, such as risks of significant material or immaterial damage. There may be exceptional instances where AI applications for certain purposes are automatically considered ‘high-risk’ – e.g. the use of AI for remote biometric identification.

Mandatory legal requirements would then apply to those ‘high-risk’ applications, such as:

  • Training data. Data used to train AI systems would have to respect the EU’s values and rules. This would be ensured via, e.g. requirements that AI systems are trained on sufficiently broad and representative data sets and that privacy and personal data are adequately protected.
  • Data and record-keeping. Accurate records of data used to train and test AI systems would need to be kept, e.g. the programming and training methodologies, processes and techniques used to build, test and validate AI systems.
  • Human oversight. The required level of oversight would depend on the intended use and effects of an AI system, but could include validation by a human and/or monitoring.

These requirements would fall on those best placed to address any potential risks – such as AI developers, deployers or distributors – who provide AI-enabled products or services in the EU, regardless of whether or not they are established in the EU.

Compliance would then be enforced via a mix of ex ante and ex post mechanisms. Ex ante, providers would face mandatory conformity / safety assessments – such as testing, inspection or certification. Ex post, competent national authorities would monitor ongoing compliance via, e.g. compliance testing. The White Paper does not set out any fining powers in respect of non-compliance, but it does refer to the need for effective judicial redress for parties negatively affected by AI systems.

What's next?

In July 2020, the Commission published its Inception Impact Assessment (IIA) which set out the various options for regulating AI based on the White Paper. 

The options put forward ranged from the baseline position of enacting no new legislation (Option 0) to the most stringent approach, which would combine any and all of the measures indicated in the White Paper (Option 4). In between sit: EU ‘soft law’ to promote industry initiatives for AI (Option 1); an EU legislative instrument setting up a voluntary labelling scheme for AI applications (Option 2); and EU legislation setting out mandatory legal requirements for all or certain types of AI applications (Option 3).

The feedback period for the IIA ended in September 2020, with over 1,200 responses submitted by a broad range of stakeholders. These responses indicated a preference for regulating only ‘high-risk’ AI, with concerns centring on possible breaches of fundamental rights and discrimination. Based on the latest policy discussions, the upcoming proposal is expected to take a ‘risk-based’ and proportionate approach incorporating:

  • Identification of ‘high-risk’ AI systems;
  • Mandatory requirements (training data, data and record keeping, human oversight) and ex ante conformity assessments for ‘high-risk’ AI applications;
  • A voluntary labelling scheme for AI applications deemed not ‘high-risk’; and
  • Specific rules for remote biometric identification – e.g. ensuring the use of AI for such purposes is justified, proportionate and subject to adequate safeguards.

Several key questions remain, for example: the exact definitions of ‘AI’ and ‘high-risk’; the nature and extent of the requirements for ‘high-risk’ AI; and the proposal’s relationship with existing and planned sectoral legislation.

It is rumoured the Commission will publish its proposal on 21 April 2021. The other European institutions will then scrutinise the text and make changes. The European Parliament has already indicated that it will be looking for a broader scope, potentially legislating beyond only ‘high-risk’ AI applications, whereas the Member States appear to be more aligned with the Commission’s approach in the White Paper.

Whatever balance the EU ultimately strikes between promoting innovation and developing trust in AI, the Commission’s proposal will mark a first: the first attempt by any regulator worldwide to create a dedicated and comprehensive AI regulatory framework, and one that will likely serve as a blueprint beyond Europe’s borders.
