Artificial intelligence – and the risk of it generating potentially arbitrary and/or discriminatory results – captured public and press attention at the end of the summer, particularly after the UK government scrapped a controversial algorithm to set marks for exam students.

Everyday working life is also becoming increasingly data-driven, as businesses seek to harness new technologies to improve people management processes and enhance operational capabilities (e.g. through increased efficiencies, cost reductions and risk mitigation). The term ‘people analytics’ has for some time been an HR buzzword to describe the application of digital tools and algorithms to people-related data. To take one recent innovative example, the social-distancing wearable ‘Bump’ has been specifically designed to improve workplace safety by using anonymous employee data to encourage effective social-distancing behaviour. We have also seen COVID-19 and people analytics overlapping in other ways, such as: employers seeking to monitor employees in their home offices; requests for employees to use contact tracing apps; and workplace temperature checks.

Against this backdrop, the UK Information Commissioner’s Office (ICO) has recently produced guidance on AI and data protection touching on, amongst other things, the importance of human oversight where employers use AI tools to make significant decisions about individuals. We have set out some of the key principles from an employment law perspective below. For more on the ICO’s guidance outside of the employment law context, please read Rachel Annear’s recent blog post on ‘Designing or buying in AI? 5 things to minimise GDPR risk’.

Since the GDPR came into force, decisions based solely on automated processing which have a legal or similarly significant effect on individuals have been prohibited unless subject to some form of human oversight. This restriction reflects the concern that AI systems may otherwise make decisions without proper checks and balances.

This prohibition will apply to HR decisions made using people analytics tools which have a significant impact on individuals, such as a hiring decision, if those decisions are ‘based solely’ on the automated processing (meaning there is no human involvement in the decision-making process). ‘Human involvement’ must be meaningful and carried out by someone who has the authority to change the decision. This is echoed in the ICO’s guidance, which emphasises that employers should ensure that people assigned to oversee AI systems remain engaged, critical and able to challenge a system’s outputs where appropriate. In other words, the ICO does not expect human involvement in decision making to be a ‘tick box’ exercise.

Whilst human oversight is one side of the equation, it is also important to ensure that robust processes are put in place when AI tools are first designed. The ICO recommends that all relevant parts of an organisation (e.g. business owners, data scientists and those with oversight functions) work together during the design and build phase of an AI project to support meaningful human review from the outset. Businesses should carefully consider the factors that they expect the AI system to take into account in decision making and, as such, which additional factors the subsequent human reviewers should focus on. For example, it might be decided up front that the AI system will consider quantitatively measurable properties, such as how many years of experience a job applicant has, while the human reviewer will qualitatively assess other aspects of an application (e.g. an applicant’s written communication). The ICO notes that it may therefore be helpful to consult and test options with human reviewers early on when designing an AI system. Where an AI system is bought in rather than built, this may require even more thought during the tender process.

It is important to remember that, when using automated decision making in relation to employee data, normal data protection principles continue to apply in addition to the more specific requirements referred to above. The typical legal grounds for employers to be able to process employee data are that: (i) the processing is necessary for the performance of a contract to which the employee is party (usually the employment contract); or (ii) the processing is necessary for the purposes of the employer’s legitimate business interests (which will require the balancing of the legitimate interests of the employer and the interests and fundamental rights and freedoms of the data subject).
People analytics tools are offering businesses many exciting, and often innovative, opportunities to change the modern workplace, especially in the current climate of COVID-19 and the ‘return to work’. However, businesses should be careful to find the right mix of people and technology when it comes to making decisions which affect individuals. AI tools cannot replace HR departments or make final decisions on behalf of managers; human oversight should be meaningfully integrated into the design and function of AI systems to adequately protect individuals in accordance with legal requirements.