How can organisations prevent discrimination?
Is AI sexist?
What if an artificial intelligence (AI)-run recruiting programme rejects a female candidate for a senior management position because her CV states that she graduated from an all-women’s college? This has happened. And it happened because the AI system had been trained to vet applicants based on a scoring system modelled on historically successful resumes submitted to the employer – most of which came from men.
While statistics vary widely as to the exact extent of current AI usage in HR functions (estimates range from 17 per cent to as high as 88 per cent), it is clear that increased remote working due to the Covid-19 pandemic has generated even greater impetus to shift to automated HR processes. Employers using AI technology services will need proper checks, or assurances from their vendors, that there are no gender biases lurking in the software.
Authorities voice concern and provide guidelines on AI
Regulators and legislators around the world have started to acknowledge the risks of using AI.
The UK Information Commissioner’s Office (ICO) recently issued guidance on AI and data protection which emphasises the need for human oversight and auditing of AI systems when they are used to make important decisions. My colleagues David Mendel and Guy Huffen recently discussed the ICO’s guidance on AI in the employment law context in their blog post ‘Human oversight, individual rights and AI systems in the workplace in the UK’.
In February, the European Commission published its White Paper on Artificial Intelligence – A European Approach to Excellence and Trust in which it proposed implementing mandatory legal requirements to, among other things, take reasonable measures aimed at ensuring that the use of AI does not lead to discrimination.
Specifically on gender discrimination by AI, the European Advisory Committee on Equal Opportunities for Women and Men published an ‘Opinion on Artificial Intelligence – opportunities and challenges for gender equality’ in March 2020. The Committee emphasised the importance of transparency in the use of data and in the criteria applied by AI in the recruitment process, to prevent gender-biased decisions going unnoticed. This is particularly important given that the reasoning behind a decision by AI will not always be apparent, due to the complexity of data-processing by algorithms.
Some jurisdictions have taken further steps to put in place enforceable regulations as part of an effort to increase transparency in AI and to ensure the accountability of those using such technology. The New York City Council is currently considering a local law which would, if enacted, prohibit the sale of AI technology unless it had been audited for bias and had passed anti-bias testing in the year before the sale. It would further require employers to disclose to candidates, within 30 days of using AI technology for hiring purposes, when and how the AI system was used. In the state of Illinois, the Artificial Intelligence Video Interview Act has been effective since 1 January 2020. It requires employers who use AI to analyse candidate video interviews to, among other things, notify, inform and obtain consent from applicants to the use of this technology. The penalties for violation of these provisions are currently modest, but there is potential for the size of the fines to increase given the rising regulatory scrutiny and attention.
The risks are real
Given the heightened social and regulatory focus on AI, the risk of companies being investigated and/or found liable for using discriminatory AI is real. According to Bloomberg, the US Equal Employment Opportunity Commission is reportedly investigating at least two cases involving algorithms that allegedly discriminated against certain groups of job applicants. Recent changes to discrimination legislation in Hong Kong provide for the award of damages for unintentional indirect gender discrimination, potentially exacerbating the risk for employers in Hong Kong where the use of AI has led to inadvertent gender discrimination.
Next steps
Against the backdrop of technical challenges and resulting legal risks, employers have been given the fairly opaque recommendation by authorities to take “reasonable measures” (European Commission) and to set up “appropriate safeguards and technical measures” (ICO) to prevent AI-bias. This is of little help to many employers who are still getting to grips with the technology and largely relying on third-party providers of such AI tech, meaning that they have no control over the software in question.
Employers who use third-party vendors’ AI technology may seek warranties from those vendors confirming that appropriate safeguards have been put in place – for example, that the data fed into or used by the technology is unbiased. However, realistically, many vendors will only be willing to contract on standard terms that do not offer these protections. Possible alternative options for employers utilising AI technology from vendors may be: (1) to require such vendors to sign up to the employer’s “anti-discrimination policy”, or (2) to ask vendors to complete a survey/questionnaire confirming that they have anti-discrimination measures in place. These documents may not be contractually binding, but they may give employers (and potentially regulators) some comfort that gender bias has been considered in the development and implementation of the AI technology.
In accordance with ICO guidelines, other measures employers may consider putting in place are as follows:
- Ensure diversity in the initial database – establish clear policies and good practices on AI development standards and controls.
- Review and challenge data output – establish testing or audit requirements for decisions made by the AI system. Ensure the outputs are fair and do not adversely and unfairly impact women or other groups. The AI system’s performance should be constantly monitored.
- Document all steps and decisions taken to manage discrimination risk – documentation may include communication between the employer and vendor explaining and enquiring about the technical approaches to ensure fairness in the quality of the data, as well as internal minutes and memos showing due consideration of these issues and compliance with internal policies. Warranties, signing up to anti-discrimination policies and confirmatory questionnaires as mentioned above may also form part of such prudent record-keeping.
- Lead with a diverse team – ensure senior managers developing recruitment policies are diverse and include key female members who influence how an AI application is developed and used. Having a diverse team will also help mitigate any unconscious biases held by the human supervisors operating the AI tool.
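The “review and challenge data output” step above can be made concrete with a simple statistical check. The sketch below (in Python, using entirely hypothetical numbers, and not drawn from any specific regulator’s methodology) applies the well-known “four-fifths rule” heuristic: the selection rate for any group of applicants should be at least 80 per cent of the rate for the most-selected group, and any group falling below that ratio is flagged for human review.

```python
# Minimal adverse-impact audit sketch using the "four-fifths rule" heuristic.
# All applicant numbers below are hypothetical, for illustration only.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (number selected, total applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times the
    highest group's rate, mapped to their impact ratio."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best
            for group, rate in rates.items()
            if rate / best < threshold}

# Hypothetical output of an AI screening tool over one hiring cycle
outcomes = {
    "women": (30, 200),  # 15% selected
    "men":   (50, 200),  # 25% selected
}

flagged = adverse_impact(outcomes)
print(flagged)  # women's rate is only 0.6 of men's, so the group is flagged
```

A check like this is only a starting point – it audits outcomes, not the reasons behind them – but running it periodically over the AI system’s decisions, and documenting the results, supports the monitoring and record-keeping measures described above.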