The UK Information Commissioner's Office has published draft guidance on the use of AI and is seeking feedback from businesses. The guidance aims to give businesses practical advice on how they should explain their use of AI to the people affected – for example, where businesses use AI to make recruitment decisions or to analyse customer behaviour.
Under the EU General Data Protection Regulation, AI decision-making that affects people’s personal data must be clearly explained to those people. Aside from meeting this legal duty, businesses that are transparent about their use of AI may also increase levels of customer trust. However, some businesses might be concerned about revealing commercially sensitive information. The ICO’s draft guidance, which was produced together with the Alan Turing Institute, gives practical examples of how to explain AI decisions.
The report identifies six main types of explanation, and says that the most appropriate explanation(s) will depend on the context of a particular AI decision:
- Rationale explanation: the reasons for a decision, expressed clearly;
- Responsibility explanation: who is involved with the AI system, and who to contact for a human review of a decision;
- Data explanation: what data has been used in a decision and how, and what data has been used to train the AI and how;
- Fairness explanation: steps taken to ensure that an AI decision is fair;
- Safety and performance explanation: steps taken to ensure an AI decision is accurate, reliable, and secure;
- Impact explanation: the impact that using the AI system might have on an individual and on wider society.
The report says that the appropriate type of explanation might depend on a business’s sector – for example, AI applications that are employed in safety-critical sectors like medicine will need to provide a ‘safety and performance’ explanation. And the level of detail required in an explanation might depend on the likely impact of an AI decision: for example, an AI system that triages customer service complaints for a retailer will require much less detail than one that triages patients in a hospital critical care unit.
The report suggests that any business that is developing or procuring an AI system should consider embedding these principles from the start – a process it calls ‘explanation-by-design’.
The report is helpfully split into three parts:
- an introduction to the issues, which will be relevant to any employee involved with AI;
- detailed practical guidelines, which will be mainly relevant to technical teams, but also to Data Protection Officers and compliance teams;
- guidance on roles, policies and procedures, which will be relevant for management, as well as DPOs and technical teams.
The deadline for submitting feedback is 24 January 2020.