The UK Information Commissioner's Office (ICO) and the Alan Turing Institute have recently published their final guidance to help businesses and organisations explain decisions made by artificial intelligence (AI) systems. This follows a public consultation that finished in January (see our previous blog post here). As we reported previously, the guidance aims to give businesses practical advice on how they should explain their use of AI to the people affected.
From chatbots to analytics, the use of AI by businesses is increasing. Many are predicting that the COVID-19 pandemic will accelerate this trend through a combination of increasing pressure on costs, the need for better consumer understanding and the search for new ways of working.
Under the EU General Data Protection Regulation, AI decision-making that affects people's personal data must be clearly explained to those people. Aside from having this legal duty, businesses that are transparent about their use of AI might also increase levels of customer trust. Given the complexities of AI, getting the level of detail in an explanation right is difficult, and businesses will find the practical advice in this guidance essential when deciding how to explain what they are doing with AI to the people affected.
The guidance is split into three parts:
- Part 1 - an introduction to the key concepts, which will be relevant to any employee involved with AI;
- Part 2 - detailed practical guidelines, which will be mainly relevant to technical teams, but also to Data Protection Officers and compliance teams;
- Part 3 - guidance on roles, policies, procedures and documentation, which will be relevant to management, as well as DPOs and technical teams.
What are some of the things to look out for?
- Thought you knew what an explanation was? Think again. The guidance identifies six main types of explanation, including rationale (the 'why'), responsibility ('who can I complain to?') and data (the 'what and how'). On top of this, you have to consider a combination of process-based explanations (reflecting good governance) and outcome-based explanations (explaining the specific results). Then layer on the context (such as how significant the decision is and how quickly it needs to be made) to refine how you deliver the explanation.
- The guidance is detailed and complex in places, but the four key principles are simple and familiar: i) be transparent; ii) be accountable; iii) consider the context you are operating in; and iv) reflect on the impact of the AI you are using.
- The detailed practical guidelines in Part 2 make it clear that good explanations are built throughout the AI development life cycle; they are not something lawyers can draft once presented with a completed system.
Things businesses should be thinking about now
- Do you have the right skills at the table? Good explanations will require a combination of skills at every stage of the decision-making pipeline, from conception to design to implementation. Understanding the technology, your business and your audience will be key. This guidance offers a good opportunity to reflect on your governance processes.
- Are you asking the right questions when buying in AI? Due diligence should be carefully managed to ensure that you understand both how the product was developed and how its explanations have been arrived at, so that you can mitigate the risks inherited on acquisition or when using a third-party developer.
- Are your decision-making records sufficiently robust? If important decisions are made using AI, it is inevitable that some will attract complaints and potential regulatory scrutiny. Navigating this technically complex landscape is tricky, but good documentation that shows a thoughtful decision-making process will be essential to managing issues that arise.
For more information on the legal issues surrounding AI, click here.