
The UK Centre for Data Ethics and Innovation hints at the future of data regulation: first stop, online targeting and algorithmic bias

Last week saw Alan Turing announced as the new face of the UK's £50 note. As well as helping to crack the Enigma code, he is credited with being the father of computer science and artificial intelligence (AI). 

It is apposite then that, on the same day, the UK's Financial Conduct Authority (FCA) announced that it was collaborating with the Alan Turing Institute on the use of AI in the financial sector and, later the same week, the UK Centre for Data Ethics and Innovation (CDEI) published its interim reports, ahead of its final recommendations in December.

The FCA and CDEI reviews are just part of the regulatory landscape. The UK Government itself is grappling with concepts such as transparency, accountability and explainability, and has proposed a new regulator to deal with online harms. Meanwhile, the Information Commissioner and the Competition and Markets Authority are gathering evidence about different aspects of data regulation.

This is all happening while the capabilities of data-driven technologies are accelerating like never before.

The CDEI's interim reports

The CDEI was set up to help navigate emerging challenges around data ethics and inform the UK Government's thinking on the right governance regime. The interim reports – one on bias in algorithmic decision making, the other on online targeting – detail the CDEI's progress to date and its emerging insights. 

Bias in algorithmic decision making

The CDEI is focusing on four areas. Two relate to the public sector; the other two, which we will look at here, are financial services and recruitment. 

For financial services, the focus is on credit and insurance decisions taken about individual customers, particularly when using data from non-traditional sources such as social media and emerging machine-learning approaches. 

For recruitment, the focus is on the use of algorithms to (partially) automate hiring decisions. This could range from the bulk screening of CVs and applications to recommending whom to invite to interview and analysing an interviewee's performance.

A key concern for the CDEI is the potential – as algorithms become increasingly complex – for existing biases to become entrenched or worsen. Whether through the data input, the design of the algorithm or the way outputs are acted on by humans, it is clear that the minimisation of bias in algorithms will require an understanding of some difficult concepts and the making of some difficult choices. 

Emerging points to watch:

  • Although the exact nature of oversight will vary according to the sector, effective human accountability will be key and is something boards will want to keep a close eye on.    
  • There is an understanding that the impression or existence of bias is often context specific and that perceptions of fairness are not always consistent. This means there will be significant value judgements to be made by decision makers.  
  • It is likely algorithms will need to be blind to protected characteristics while also checking for bias against those same characteristics (illustrated in the sketch after this list). 
  • It will not be possible to remove all bias from a decision, but where to set the trade-off between accuracy and fairness is a choice businesses may well have to consider carefully and be prepared to justify.
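
The third bullet is easier to see in code. Below is a minimal sketch, in Python, of a decision model that is trained without access to a protected characteristic and is then audited against that same characteristic. The data, the column choices and the 0.8 review threshold are hypothetical illustrations, not anything the CDEI has specified.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicant data: two ordinary features plus a protected characteristic
# (0/1 group membership) that is deliberately held back from the training inputs.
n = 1000
features = rng.normal(size=(n, 2))       # e.g. income and credit-history signals
protected = rng.integers(0, 2, size=n)   # protected characteristic, not a model input
outcome = (features[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Train the decision model "blind" to the protected characteristic.
model = LogisticRegression().fit(features, outcome)
decisions = model.predict(features)

# Audit step: compare approval rates across the protected groups
# (a simple demographic-parity style check; many other fairness metrics exist).
rate_group_0 = decisions[protected == 0].mean()
rate_group_1 = decisions[protected == 1].mean()
disparity = min(rate_group_0, rate_group_1) / max(rate_group_0, rate_group_1)

print(f"approval rate, group 0: {rate_group_0:.2f}")
print(f"approval rate, group 1: {rate_group_1:.2f}")
print(f"disparity ratio: {disparity:.2f} (flag for review if below, say, 0.8)")

The point is the separation of roles: the protected characteristic is excluded as an input but retained for the audit, which is where the value judgements about fairness described above have to be made.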

Online targeting

The online targeting review is broad and encompasses any technology used to analyse information about people and then, automatically, customise their online experience. This includes targeted online advertising, recommendation engines and content ranking systems. 
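
To make that scope concrete, here is a minimal, hypothetical sketch in Python of the basic shape of such a system: a user's behaviour is turned into an inferred interest profile, which is then used to re-rank content automatically. The topics, weights and scoring rule are illustrative assumptions, not a description of any particular platform.

from collections import Counter

# Interest profile inferred from a user's past clicks (topic -> weight).
user_profile = Counter({"finance": 5, "sport": 2, "politics": 1})

# Candidate items, each tagged with topics by the publisher or a classifier.
catalogue = [
    {"title": "Markets rally on rate cut", "topics": ["finance", "politics"]},
    {"title": "Cup final preview", "topics": ["sport"]},
    {"title": "Election debate fact-check", "topics": ["politics"]},
]

def personalised_rank(items, profile):
    """Order items by how strongly their topics match the user's inferred interests."""
    def score(item):
        return sum(profile.get(topic, 0) for topic in item["topics"])
    return sorted(items, key=score, reverse=True)

for item in personalised_rank(catalogue, user_profile):
    print(item["title"])

Even something this simple shows why the review is interested in inferences: the ordering is driven entirely by what the system has inferred about the user, not by anything the user has explicitly chosen.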

The CDEI will focus on what are perceived to be high-risk sectors, including targeted news and information, media, user-generated content, advertising, retail and public services.

While the CDEI recognises that the underlying technology in online targeting is enormously beneficial in navigating the internet, it is clear there are concerns with the current model. A key one is that, although people can see the benefit in personalising their online experience, the more they understand about how it works, the less likely they are to think the current practices are acceptable. 

There is a fear that these systems have become so pervasive that they no longer simply predict our existing beliefs and desires; they are starting to shape them. Among the greatest areas of concern are the potential to exploit people's vulnerabilities and the impact of targeting on trust in information and markets.

Although it appears the CDEI would not recommend new regulation without strong evidence and justification, there are signals that it will recommend exactly that in December. Likely areas include:

  • businesses having to be more open about the process they use to determine the acceptability of targeting algorithms;
  • targeting processes, including the inferences that can be made and used, and the types of targeting that can be undertaken;
  • stronger obligations on organisations to protect against vulnerability;
  • giving individuals more powers relating to, for example, consent, transparency and data portability; and 
  • enhancing competition, for example through the use of data trusts.

There are some interesting themes emerging from these interim reviews and some indications of how the future of data regulation might look. In the words of Alan Turing: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”


Tags

europe, ai, social media, machine learning, automotive, insurtech, intellectual property, employment, cyber and data