
Freshfields TQ



Decisions taken by AI: what should you tell people?

The UK Information Commissioner’s Office and the Alan Turing Institute have issued an interim report on how businesses should explain AI decisions to those affected. The ICO and ATI have been consulting with the public and industry with a view to drafting practical guidance for businesses on how to explain their AI decisions (‘Project ExplAIn’).

Under the GDPR, people have a right to an explanation of certain solely automated decisions that affect them.

The interim report gives a welcome indication of what matters to the public and to the organisations that must explain AI decisions to them. We look at some of the key issues below.

1. Context matters

The interim report says that the content of AI explanations will depend on context, specifically the use case and the user.

Explanations matter more where a decision can be challenged or where the person can act on feedback. For example, in recruitment or criminal justice, having AI decisions explained will be a priority for the job applicant or defendant. In healthcare settings, by contrast, patients will be more concerned with getting a quick and accurate diagnosis than with an explanation of how the AI technology reached its conclusion.

How the explanation is pitched will also depend on the recipient’s level of expertise in the relevant area. In healthcare, explanations may necessarily be highly technical, so the best person to receive the explanation might be the healthcare professional rather than the patient. Equally, a recipient might not be in a suitable position to receive or understand an explanation; in that case, they might choose to designate an agent to receive it on their behalf.

The interim report suggests several contextual factors, including:

  • Urgency of the decision
  • Impact of the decision
  • Ability to change factors or influence the decision
  • Scope for bias
  • Scope for interpretation in the decision making process
  • Type of data used

A ‘one size fits all’ approach is unlikely to be successful in delivering appropriate explanations. The report suggests a hierarchy of explanations might work, allowing individuals to choose the amount of detail most relevant to them. This would align with the GDPR’s emphasis on giving people ‘meaningful information’.

2. Increased public education on AI will be important

The report also reveals that there’s an appetite for increased public education on AI in general. There are some concerns about causing confusion by publishing too much information, but the hope is that educating the public will tackle misconceptions about AI and how it works.

Businesses should be responsible for educating their staff internally. However, there’s little suggestion of who should be responsible for educating the general public about AI and, importantly, who would bear the cost.

3. Challenges – cost and commercial risk

The report concludes that the major challenges to explaining AI decisions relate to cost, rather than technical feasibility. Another challenge is how to pitch explanations at an appropriate level of detail – although the hierarchy approach might help here. Industry bodies have mentioned the potential risks of including too much detail in explanations. Examples include revealing commercially sensitive information and potentially infringing third-party IP rights. A data protection risk also arises where a full explanation would include third-party personal data. More cynically, revealing too much about the process might lead to gaming or other exploitation of the decision-making system.

Next steps 

A full draft of the Project ExplAIn report will be out for public consultation this summer, with guidance being published in the autumn. It will be interesting to see to what extent the views in the interim report are embedded into the final guidance.

The Project ExplAIn findings are likely to inform the ICO’s AI auditing framework, which is due to be finalised in 2020. The framework aims to help the ICO assess the data protection compliance of organisations using AI. We’ll also see further guidance for businesses on managing the data protection risk of AI.

If your business uses AI to make decisions, you should consider what types of people might be affected and how meaningful explanations could be tailored to them – but be mindful of the legal and commercial risks of explaining too much.

***

To read more on AI and how it intersects with regulation and the law, please visit our AI hub.
