Co-author: Rachel Duffy
Last week, the European Commission published an important paper on artificial intelligence and robotics. Its communication on “Artificial Intelligence for Europe”, released on 25 April 2018, describes AI technologies as being as transformative as the steam engine or electricity. They will, it is said, help solve some of the world’s biggest problems, from chronic disease, to climate change, to cybersecurity threats.
The communication sets out three pillars of a proposed integrated approach to AI across Europe: keeping ahead of technological developments and encouraging uptake of AI by the public and private sectors; preparing for the socio-economic changes brought about by AI; and ensuring an appropriate ethical and legal framework.
Boosting Europe’s technology and industrial capacity in AI
As a starting point, the EU has announced additional funding for AI-related projects via its Horizon 2020 research and innovation programme, with up to €500 million extra available each year between now and 2020. Both the private and public sector can bid for these funds. Together with wider public and private sector investment, the Commission wants the EU as a whole to invest at least €20 billion in AI by the end of 2020.
The Commission has also announced support for a new “AI-on-demand platform”. This will provide businesses with a single access point to connect to relevant AI resources in Europe, such as data repositories, (cloud) computing power and algorithms. The platform is also intended to help potential users of AI understand whether and how AI could be integrated into their business models. To facilitate access to the platform, the Commission will also create a network of over 400 digital innovation hubs focused on AI, drawing on the expertise of the European AI community.
Longer-term plans focus on R&D funding and on supporting the adoption of AI by SMEs, start-ups and other organisations across all sectors. Of particular interest is the proposal for a “regulatory sandbox” to facilitate the testing of AI. This, in essence, is a framework set up by a regulator that allows businesses (typically start-ups) to conduct live experiments with real consumers, in a controlled environment under regulatory supervision. The concept has been successfully trialled in the UK financial sector by the Financial Conduct Authority (further details here). However, the Commission’s proposals do not elaborate on how an AI regulatory sandbox would operate in practice.
New legislation to improve data sharing
The Commission recommends that “public policy should…encourage the wider availability of privately-held data, while ensuring full respect for legislation on the protection of personal data”. It envisages that opening up data in this way will drive progress for AI applications in sectors such as transport and health. To this end, it has also proposed revisions to the Directive on the re-use of public sector information (here), recommendations on the preservation of scientific data (here) and guidance on sharing private sector data (here). These will complement its proposals for a Regulation on the free-flow of non-personal data throughout the EU.
The UK House of Lords’ recent publication of the findings of its Select Committee on Artificial Intelligence, which my colleague Sam discussed in a briefing last week, also highlighted the value of publicly-held and open data sets.
Tackling socio-economic challenges in the labour market
Last November, a report by McKinsey predicted that automation and AI would bring massive change to the global jobs market by 2030, with around 50% of current work activities being capable of automation by adapting technologies that already exist today.
The Commission’s report is optimistic about the impact that AI will have on the labour market and society as a whole. Whilst it acknowledges that some jobs will disappear, it expects that new jobs will be created to develop and maintain machine-learning algorithms, and that AI could make remaining workers more productive. The Commission emphasises the need to improve the digital literacy of society as a whole, as well as to provide “up-skilling” opportunities for workers whose jobs are most likely to be affected.
Addressing new ethical and legal issues
The Commission’s report notes the importance of trust and accountability, as well as a predictable legal environment, in order to ensure that the EU remains competitive and innovative in relation to AI whilst retaining high standards of respect for fundamental rights and safety.
As a first step, the Commission proposes the establishment of a European AI Alliance by July 2018. The Alliance will bring together relevant stakeholders to draft guidelines on AI ethics by the end of 2018. These guidelines will address a range of issues, including safety, security, algorithmic transparency and consumer protection, helping to ensure that key values (such as democratic principles and respect for fundamental rights) are embedded into both the development and use of AI solutions.
There are also a number of legal developments on the horizon beyond those I have already discussed. These include a review of both horizontal legislation and sector-specific rules, for example to address circumstances where AI / “internet of things” applications may act in unforeseen ways. The Commission is, for instance, creating new expert groups to look at whether changes are needed to the Product Liability Directive. It also notes that it will “closely follow” the application of the GDPR in the context of AI applications, and that it may need to review consumer protection legislation, particularly to deal with the impact of AI in business-to-consumer transactions. However, once again, the paper is light on specific details.
Looking forward
The Commission’s paper attempts to set out a clear timeline to achieve the goal of developing an “integrated and comprehensive European initiative on AI”. The timetable for doing so is short: much of the key foundational work is targeted for 2018-19, including the establishment of the AI Alliance by July 2018, which is in turn expected to publish (at least) draft guidelines on AI ethics by the end of this year. The paper is also sometimes light on detail. We will need to wait and see exactly how the promised “coordinated approach” to AI across Europe will look.