In light of the release of a coordinated action plan (the Action Plan) by the European Commission (the Commission) in late 2018, we look at what to expect from the use and development of artificial intelligence (AI) in Europe in the year ahead.
The Action Plan
The Action Plan is the product of a collaborative enterprise between the Commission, the EU Member States, Norway and Switzerland, in line with the commitment the Commission made in April 2018 to deliver a European AI strategy.
The plan focuses on steps to be taken during 2019 and 2020, which include “a set of concrete and complementary actions at EU, national and regional level” to make Europe “the world-leading region for developing and deploying cutting-edge, ethical and secure AI”.
Alongside the Action Plan, the Commission is encouraging Member States to put in place national AI strategies, outlining their investment and implementation plans. Germany, France, Sweden, Finland and the UK have already adopted strategies of this type.
Maximising investment through partnerships
The Action Plan recognises an urgent need to address the “low and fragmented” nature of existing investment in AI within the EU. It calls for a joint effort from the Commission, Member States and the private sector “to facilitate and reinforce investment” and it sets out the Commission’s aim to increase public and private spending on AI across the EU to at least EUR 20 billion per annum by the end of 2020. The Commission will allocate EUR 1.5 billion of EU funds by the end of 2020, in addition to national investment.
From the lab to the market
The Action Plan commits to invest in research and sets out a plan to establish “world-reference testing facilities”. To facilitate the transfer of research results to industry, the Action Plan proposes the use of Digital Innovation Hubs (DIHs). These will act as a ‘one-stop shop’ for companies, particularly SMEs, to access new technology and training.
In 2019 and 2020, the Commission plans to make available more than EUR 100 million for DIHs in areas relevant to AI, including ‘big data’. Beyond 2020, it is envisaged that up to EUR 900 million will be invested to support the development of hubs in each Member State.
Skills and life-long learning
In recognition of the AI skills gap in businesses, the Action Plan sets out the need for skills-based learning. The Commission will support advanced degrees in AI and will make appropriate provision within the EU Blue Card scheme (the work permit for highly qualified non-EU nationals) to help EU-based enterprises attract and retain talented individuals.
European data space
The collaborative cross-border activity envisaged by the Action Plan includes plans to aggregate data across Europe, while ensuring full compliance with applicable legislation, including the General Data Protection Regulation (GDPR). This will be facilitated by the creation of a common ‘European Data Space’, “a seamless digital area that will enable the development of new products and services based on data”.
The Action Plan’s proposals for a seamless flow of data would have particular benefits for the health sector. The Commission proposes to use AI for two health-related initiatives:
- linking genomics repositories across Europe; and
- creating a common database of anonymised health images which will be dedicated initially to the most common forms of cancer.
Ethics and regulation
The Action Plan recognises that in order for citizens to trust AI and for companies to take up new business opportunities with investment security, AI needs to be developed in line with an appropriate ethical and regulatory framework.
As such, the Commission has tasked a High-Level Expert Group on AI to draft ethics guidelines (the Guidelines) and to put forward policy recommendations for a new regulatory framework. As reported on our Digital Blog, the Draft Ethics Guidelines for Trustworthy AI were released on 18 December 2018 (ahead of the publication of a final version which is expected in March 2019). For more information on the Guidelines and ethical AI, see our January 2019 blog post on Artificial Ethics: Evolving guidelines to help thinking machines make the “right” choices.
The Commission has identified the need to create a legislative framework which encourages AI innovation while ensuring that there are effective safeguards in place. A key concern for businesses is cybersecurity, and the Commission intends to boost EU-wide cybersecurity capabilities to ensure consumer protection and effective victim redress. At the end of 2018, a political agreement was reached by the European Parliament, the Council of the EU and the Commission on the new EU Cybersecurity Act, although the proposal remains to be formally adopted. Other important legal concerns which the Commission proposes to address include data privacy and compliance with competition law.
Notwithstanding the need to develop appropriate safeguards, the Commission also intends to provide regulatory authorities with “a sufficient margin of manoeuvre”, including through the use of “regulatory sandboxes”. These ‘sandboxes’ would allow companies to benefit from a lighter touch regulatory regime when testing AI products. The Action Plan encourages Member States to enable “companies that are developing AI applications to discuss the specific needs for the creation of such environments and testing arrangements”. However, the Commission’s proposals do not elaborate on the nature of any specific legal or regulatory requirements which could be relaxed or the specific circumstances in which any exemptions might be available.
Challenges for businesses
The Action Plan is a clear step forward in the advancement of AI in Europe, and lays the groundwork for coordinated action over the next decade. The Commission believes that AI will be “the main driver of economic productivity and growth”, but the Action Plan also recognises that such advancement presents significant challenges for businesses, including:
- Complexity – for new AI to be successfully deployed in the market, businesses will need to upskill and reskill their workforce. The Action Plan identifies the issue of an ICT skills gap, particularly given the complexity and unfamiliarity of AI, and the pace of advancement;
- Employment disruption – a key concern about AI innovation is that it will disrupt the EU labour market. The High-Level Expert Group is due to deliver a report in spring 2019 which will consider strategies to deal with the impact of digital transformation on employment;
- Security – further legislation will be needed to bolster cybersecurity infrastructure. For some businesses, the use of AI security products may become compulsory in order to provide effective mitigation against risks. A key principle of AI development will be “security by design”, whereby cybersecurity, the protection of victims and the facilitation of law enforcement activities will be embedded from the beginning of the design process; and
- Compliance – following in the wake of the GDPR, the focus in 2019 will be on assessing whether, and to what extent, Europe’s current regulatory framework is fit for purpose given the fast pace of AI-induced change. The Commission is due to publish a report on regulatory gaps by mid-2019, following which it appears likely that we will see further changes made to the regulatory landscape. Businesses should be alert to changing legislation which may affect their operations.
While the Action Plan provides many reasons to be hopeful, the Commission will need to balance appropriate consumer protection and other safeguards against the need to support and incentivise industry. Whether this balance can be struck successfully will only become apparent once detailed legislative proposals are available, and that is still likely to be a number of years away.
For more information on AI, visit our ‘AI’ hub which explores the rapidly developing technology and how it intersects with regulation and the law.