
Freshfields TQ



Bias in algorithmic decision-making: recommendations from the UK’s Centre for Data Ethics and Innovation

The Centre for Data Ethics and Innovation (CDEI) has published the final report of its review into bias in algorithmic decision-making. The report feels timely in 2020, a year in which equality and fairness have been at the forefront of the public consciousness.

Indeed, bias in algorithmic decision-making has come under increased public scrutiny since the CDEI published its interim report in 2019 (see our summary for further details). This is partly due to the COVID-19 pandemic: the infamous algorithm used to calculate A-level results in the UK after exams were cancelled was widely criticised for disadvantaging pupils from poorer backgrounds, and the results it produced were eventually scrapped.

The CDEI notes that the current socio-political climate highlights “the urgent need for the world to do better in using algorithms in the right way: to promote fairness, not undermine it”. Its report focuses on four areas: two relate to the public sector, and the other two, financial services and recruitment, are the focus of this blog.

How can bias be introduced into an algorithm?

Bias can be introduced at various points in the creation of an algorithm. The first step a data scientist takes when creating a machine learning model is deciding what they want the model to achieve, a framing choice that can introduce the scientist's own bias at the outset.

The next step is collecting the data that will be fed into the algorithm. The data could be:

  • unrepresentative (eg a facial recognition program could be fed more pictures of white people, resulting in higher error rates in identifying people from ethnic minorities); or
  • reflective of existing biases (eg internal recruiting tools could dismiss female candidates because of historic hiring decisions).
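
To make the first of these concrete, the snippet below is a minimal sketch (in Python, using pandas) of the kind of representation check a data scientist might run: how well each group is represented in the data, and whether the model's error rate differs between groups. The column names and figures are invented for illustration and are not taken from the CDEI report.

```python
import pandas as pd

# Illustrative data only: the column names ("ethnicity", "label", "prediction")
# are hypothetical stand-ins for whatever the real dataset contains.
df = pd.DataFrame({
    "ethnicity":  ["white", "white", "white", "white", "black", "black", "asian", "asian"],
    "label":      [1, 0, 1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 1, 0, 0, 1, 0, 0],
})

# 1. How well is each group represented in the data set?
print(df["ethnicity"].value_counts(normalize=True))

# 2. Does the error rate differ between groups?
df["error"] = (df["label"] != df["prediction"]).astype(int)
print(df.groupby("ethnicity")["error"].mean())
```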

The scientist will then select the attributes they want the algorithm to consider, such as age, gender, or education level. This presents another opportunity for bias to be introduced into the algorithm if certain characteristics are included or excluded.
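
A short, hypothetical illustration of this selection step: simply excluding protected characteristics as inputs does not on its own guarantee an unbiased model, because other attributes can act as proxies for them. The attribute names below are invented.

```python
# Hypothetical list of candidate attributes for a recruitment model.
candidate_features = ["age", "gender", "education_level", "postcode", "years_experience"]

# Excluding protected characteristics removes them as direct inputs...
protected = {"age", "gender"}
selected = [f for f in candidate_features if f not in protected]
print(selected)  # ['education_level', 'postcode', 'years_experience']

# ...but correlated attributes (eg postcode correlating with race, or career
# gaps with maternity) can still carry the same bias into the model, so the
# choice of what to include or exclude needs to be examined, not assumed safe.
```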

New input data and associated decisions can also be fed back into the original data set to update the model, potentially exacerbating biases that have been introduced earlier in the process.
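
The toy simulation below (all numbers invented) sketches how such a feedback loop can widen an initial disparity: the model's own decisions are appended to the data set and effectively become the ground truth it is retrained on.

```python
# Historic decisions: group "A" approved 80% of the time, group "B" 40%.
data = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

def approval_rates(records):
    return {
        group: sum(y for g, y in records if g == group) / sum(1 for g, _ in records if g == group)
        for group in {g for g, _ in records}
    }

for round_no in range(1, 4):
    rates = approval_rates(data)
    for group in ("A", "B"):
        # Crude "model": approve a group's applicants only if its historic
        # approval rate is at least 50%, then feed those decisions back in.
        decision = 1 if rates[group] >= 0.5 else 0
        data += [(group, decision)] * 50
    print(round_no, {g: round(r, 2) for g, r in approval_rates(data).items()})

# The gap between the groups grows each round: bias introduced earlier in the
# process is not just preserved but amplified.
```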

What is the legal background?

A wide range of laws would make biased algorithmic decision-making unlawful, including:

  • Equality Act 2010: sets out nine “protected characteristics” on the basis of which it is unlawful to discriminate (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation).
  • Human Rights Act 1998: also prohibits discrimination on the basis of a wider range of characteristics, in the context of the rights set out in the European Convention on Human Rights.
  • EU/UK data protection law: contains a range of provisions relevant to algorithmic decision-making, including restrictions on solely automated decisions that have legal or similarly significant effects for individuals.

What does the CDEI recommend?

The CDEI made the following observations about the recruitment and financial services sectors, respectively:

  • The use of algorithmic decision-making is growing rapidly in the recruitment sector. However, there is a lack of understanding of how to prevent these algorithms from entrenching existing biases (as described above).
  • The financial services sector is more mature and willing to test systems for bias. It also benefits from the active regulatory oversight of the Financial Conduct Authority and Bank of England. However, there are still risks in relation to under-represented groups in the financial system, and issues with the use of information in relation to credit scores.

The CDEI recommends:

  • The recruitment sector and employers should carry out Equality Impact Assessments to understand how models perform for candidates with different protected characteristics, including intersectional analyses for those with multiple protected characteristics.
  • Organisations in the financial sector should be able to explain the models used, particularly in making customer-facing decisions, so that discriminatory outcomes can be identified and mitigated.
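
As an illustration of the second point, one way to make a customer-facing decision explainable is to use an interpretable model whose per-feature contributions can be read off directly. This is only a minimal sketch: the features, training data and applicant below are invented, and a real credit model (and the CDEI's expectations of it) will be considerably more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: 1 = credit approved, 0 = declined.
feature_names = ["income", "existing_debt", "years_at_address"]
X = np.array([[30, 5, 1], [60, 2, 4], [45, 10, 2], [80, 1, 10],
              [25, 8, 1], [55, 3, 6], [40, 6, 3], [70, 2, 8]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# For one applicant, the contribution of each feature to the decision
# (coefficient x feature value) gives a simple, reviewable explanation.
applicant = np.array([[35.0, 7.0, 2.0]])
contributions = dict(zip(feature_names, (model.coef_[0] * applicant[0]).round(2)))
print("decision:", int(model.predict(applicant)[0]), "contributions:", contributions)
```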

What should organisations consider when using algorithmic decision-making?

The CDEI recommends the following:

  • Increased workforce diversity should be a priority. This applies not just to data science roles, but also to operational, management and oversight roles.
  • Measure outcomes in decision-making against relevant protected characteristics to detect biases (a minimal sketch of this kind of monitoring appears after this list). Organisations can then address any outcome differences that lack objective justification.
  • Use appropriate bias mitigation techniques. 
  • Be careful not to produce new forms of bias in mitigation efforts. Some bias mitigation techniques may risk introducing positive discrimination, which is unlawful under the Equality Act 2010.
  • Be aware of the differences in US and UK/EU equality law. Many of the algorithmic fairness tools currently in use have been developed under the US regulatory regime. These may not be fit for purpose in the UK as the relevant equality law is different.
  • Create organisational accountability. Set out clear ownership of the process, ensuring fair decisions are made and providing transparency about the use of algorithms.
  • Engage with regulators and industry bodies to set standards and norms.
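
As flagged above, here is a minimal sketch of what measuring outcomes against a protected characteristic could look like in practice. The decision log and column names are invented, and the simple selection-rate ratio used here is only one of many possible fairness metrics.

```python
import pandas as pd

# Illustrative decision log: the protected characteristic is recorded for
# monitoring purposes, not used as a model input.
decisions = pd.DataFrame({
    "sex":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [1, 0, 0, 1, 1, 1, 0, 1],
})

# Selection rate per group, and the ratio of the lowest to the highest rate.
rates = decisions.groupby("sex")["selected"].mean()
print(rates)
print("ratio (min/max):", round(rates.min() / rates.max(), 2))

# A markedly low ratio flags an outcome difference the organisation would then
# need to investigate and, absent an objective justification, address.
```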

Tags

ai, intellectual property, employment, europe, regulatory, fintech