The House of Lords Select Committee on Artificial Intelligence has published its report considering the economic, ethical and social implications of advances in artificial intelligence. The Committee issued a public call for evidence in July 2017, focusing on seven topics detailed in a previous blog post here.
The report was published on 16 April 2018. It focuses on the effect of AI on everyday life; the opportunities, risks, and ethical issues raised by AI; and how the public might best be engaged with regard to AI. We’ve picked out a few interesting points below.
The definition of AI – is it sufficient?
The report noted the lack of a consistent definition of AI, a common feature of newly developed technologies. The Committee chose the following definition:
"Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation."
This is an interesting choice. It defines AI in relation to human cognitive ability, and in doing so seems to omit one of the most disruptive aspects of AI: applying machine learning to vast datasets at a scale no human could match. This matters if the definition goes on to shape the development of UK law on AI: a legal framework built around it may be ill-equipped to deal with the most disruptive aspects of the technology, which are precisely those most in need of regulation. The definition was itself taken from the government's Industrial Strategy White Paper, indicating that it already has some influence within government.
AI and big data
The report also dealt with questions surrounding AI and data protection. The Committee notes that data is a key resource in realising the potential of AI: many of the most valuable tasks that AI can perform require large datasets as input.
The Committee notes that the concentration of large, high-quality datasets in the hands of a few international technology companies is a problem. It restricts access to data and prevents important sectors of society from benefitting fully from AI. The Committee is particularly concerned that the public and charitable sectors will lose out on the benefits that AI could bring. It also notes that SMEs need access to big data to remain competitive in an increasingly technological world. Allowing the large incumbent technology companies to retain monopolies on big data threatens competition in wider markets.
The Committee recommended establishing 'data trusts' through which information can be shared responsibly and ethically between organisations. This would reduce the barriers to entry for using AI, and therefore allow all of society to benefit from advances in AI.
Big data versus privacy
The report comes at a time when the rights to privacy and personal data are also at the forefront of the political agenda. Protection of these rights comes into direct conflict with the ready access to big data that AI requires. If the power of AI is to be fully harnessed, a balance between data protection and data access needs to be struck.
Anonymising data will help, as it protects the personal aspects of data while leaving enough useful information in a dataset for use by AI. In this way, the anonymisation provisions of the incoming GDPR may not unduly hinder the UK's use of AI. The Committee also notes that data shared through 'data trusts' would need to be appropriately anonymised. If its recommendations are followed, this points to an even greater role for anonymisation than that currently envisaged under the GDPR.
One question this raises is whether the usefulness of large datasets to AI can be preserved through anonymisation, or whether removing the personal aspects of data necessarily inhibits the ability of AI to detect useful patterns.
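To make that trade-off concrete, below is a minimal, hypothetical sketch of one common anonymisation step: generalising quasi-identifiers such as age and postcode into coarser bands. The dataset and column names are invented for illustration, and real anonymisation under the GDPR involves considerably more than this. The point is that the same coarsening that hinders re-identification also removes detail that a learning algorithm might otherwise use.

```python
# Minimal sketch of quasi-identifier generalisation, a common anonymisation step.
# The records and field names below are invented purely for illustration.

records = [
    {"age": 34, "postcode": "SW1A 1AA", "diagnosis": "asthma"},
    {"age": 37, "postcode": "SW1A 2BB", "diagnosis": "asthma"},
    {"age": 52, "postcode": "EC2V 7HH", "diagnosis": "diabetes"},
]

def generalise(record):
    """Coarsen quasi-identifiers: exact values become bands and prefixes."""
    decade = record["age"] // 10 * 10
    return {
        "age_band": f"{decade}-{decade + 9}",          # e.g. 34 -> "30-39"
        "postcode_area": record["postcode"].split()[0],  # keep outward code only
        "diagnosis": record["diagnosis"],
    }

for row in (generalise(r) for r in records):
    print(row)

# Re-identifying an individual is now harder, but any pattern that depended
# on exact age or full postcode is also lost - the utility question that
# the Committee's recommendation leaves open.
```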
Legal liability and AI
The Committee also addressed concerns over legal liability. There were several public submissions on this, but most can be summarised by the question: 'who is accountable for decisions made or informed by AI?' The Committee did not reach an answer, and instead recommended that the Law Commission consider whether the law needs changing. However, the Committee and various contributors raised some interesting issues.
The report noted that the UK’s legal system establishes liability based on reasonable standards of decision-making and the foreseeability of an event. These concepts are difficult to apply to AI because of the complexity of the computerised decision-making process: it is hard to say what a 'reasonable' algorithm would have decided, or whether a given outcome was foreseeable. The law will therefore need to be adapted by both the judiciary and the legislature, and a few options are suggested in the report.
One suggestion was that robots be conferred with legal status. The EU Parliament Committee on Legal Affairs has previously proposed this to the EU Commission, and it may provide a basic framework for establishing liability. Alternatively, a mandatory insurance scheme with a supplementary fund could be put in place, which would help deal with driverless cars in particular; some jurisdictions, such as Victoria in Australia, already operate this type of scheme for cars with human drivers. Insurance was noted as key to addressing the problem of liability because it can provide a remedy regardless of whether there was any fault in the AI's decision. The final option, briefly mentioned, is that the owner of the AI causing harm could be held liable. This ties the risk of liability to the economic benefit received from owning an AI, but it contradicts traditional concepts of fault-based liability.
The report also emphasised the need to resolve these issues so there is a clear legal framework that allows AI adoption to be as unhindered as possible.
The full report is here.
Update: the EU Commission also published its own Communication on AI on 25 April 2018, available here. A blog post on this will follow soon.