So far, people analytics has only supported recruiters in assessing and pre-selecting applicants. That may soon change. Will algorithms completely replace HR managers in making hiring decisions in the near future? 

According to a recent Washington Post article [paywall], a new AI hiring system called HireVue is gaining ground, particularly with employers in the US. Candidates conduct their interview through the camera of their computer or mobile phone. The system analyses their facial movements, eye contact, word choice and speaking voice before ranking them against other candidates. 

According to the Washington Post, more than 100 companies now use the system and more than a million job seekers have been analysed. Meanwhile, universities offer courses for students to prepare for the HireVue interview. 

As in Europe, the use of algorithms and AI for hiring decisions also raises eyebrows in the US. While some say that people analytics tools are not sufficiently rooted in scientific fact and open the door to discrimination, others argue that computers are still more objective than human recruiters, who suffer from conscious and unconscious bias. Another major criticism, particularly in Europe, is that people analytics interferes with employees’ privacy rights. Others point out that unsuccessful candidates will not be told what, in the algorithm’s view, they did wrong – but is that really so different from an interview with a human recruiter?

According to the Washington Post, HR managers in the US think that, in future, at least some employers may rely exclusively on machines and algorithms for recruitment decisions. Might that also be the case in Europe? Possibly not. According to Article 22(1) of the GDPR, candidates have the right not to be subject to a decision based solely on automated processing if this decision produces legal effects concerning them or similarly significantly affects them. 

While Article 22(2) of the GDPR provides for some limited exceptions, in most cases this will mean that the final decision will have to be taken by a human recruiter. 

In these circumstances, the recruiter will have to:

  • be empowered to make decisions;
  • have a sufficient data basis to make decisions;
  • have sufficient professional qualifications to make decisions;
  • be permitted to deviate from the automated pre-selection process;
  • be able to intervene at the right time, ie before the final decision is made; and
  • not limit themselves only to a random check or to filtering out implausible results.
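To make the checklist concrete, the requirements above could be modelled as a validation step in an applicant-tracking system. The following Python sketch is purely illustrative – all class and field names are hypothetical, not part of any real HR product or of the GDPR itself:

```python
from dataclasses import dataclass

# Hypothetical record of a human recruiter's review step in an
# AI-assisted hiring pipeline. The flags mirror the requirements
# listed above; none of these names come from a real API.
@dataclass
class HumanReview:
    reviewer_empowered: bool        # empowered to make the decision
    sufficient_data: bool           # had a sufficient data basis
    professionally_qualified: bool  # qualified to assess candidates
    may_deviate_from_ai: bool       # permitted to overrule the pre-selection
    intervened_before_final: bool   # acted before the final decision
    full_review: bool               # went beyond a random/plausibility check
    notes: str = ""                 # written documentation of the review

REQUIREMENTS = [
    ("reviewer_empowered", "reviewer must be empowered to decide"),
    ("sufficient_data", "reviewer must have a sufficient data basis"),
    ("professionally_qualified", "reviewer must be qualified"),
    ("may_deviate_from_ai", "reviewer must be able to deviate from the AI"),
    ("intervened_before_final", "review must precede the final decision"),
    ("full_review", "review must go beyond random checks"),
]

def unmet_requirements(review: HumanReview) -> list[str]:
    """Return the requirements this review fails to satisfy."""
    failures = [msg for attr, msg in REQUIREMENTS
                if not getattr(review, attr)]
    if not review.notes:
        failures.append("human intervention must be documented")
    return failures
```

A review that sets every flag and records written notes would pass with an empty failure list; a review with no documentation or only a spot check would be flagged before the decision is finalised. The design choice of returning the list of unmet requirements, rather than a bare pass/fail, reflects the documentation duty discussed below: the employer needs to show *which* conditions were met, not merely assert compliance.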

Furthermore, this human intervention must be well documented. Otherwise, an employer who uses AI in a hiring decision will not be able to prove that they have acted in accordance with the GDPR and that the final decision was made by a human being.