How can we ensure algorithms prevent, rather than embed, bias?
By Jo Faragher on 27 August 2020
It sounds like something out of a science fiction movie, but responding to the A-level results crisis this week, UK Prime Minister Boris Johnson claimed the reason that around 40% of students had initially been marked down was a “mutant algorithm”. This was despite other ministers claiming the system was robust and reliable just days before.
The furore around the results has reignited an ongoing debate about how organisations use algorithms and artificial intelligence. There are many arguments that using an algorithm can actually reduce bias in processes such as recruitment, by removing the element of human judgement and our own unconscious prejudices. But at the same time, there is also evidence that AI models can embed biases – an algorithm used to predict reoffending rates in Florida, for example, was found to mislabel African-American defendants as being at high risk of reoffending at nearly twice the rate at which it mislabelled white defendants.
Kim Nilsson, CEO of data science company Pivigo, believes the UK government’s A-level debacle was “sadly one which could have been predicted”. She says: “There are individuals in or near our government who have a great belief in the power of technology to help our lives and in time we will be able to rely more and more on tech and algorithms, but we are still some way from ‘road testing’ this form of technology, and it is still utterly dependent on the proper use and proper input of data to give reliable results.” Algorithms, she adds, are simply machines that do as they are told – and because so many decisions in building the algorithm are inherently human (what data to input, which models to use, the parameters to set), this is where biases can creep in.
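Nilsson's point can be illustrated with a toy sketch (the data and the "model" here are entirely hypothetical, invented for illustration): a system that simply learns to reproduce past decisions will faithfully automate any disparity baked into those decisions, without anyone writing a single biased line of code.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired) pairs.
# Group B was hired at half the rate of group A for equivalent CVs.
history = ([("A", True)] * 60 + [("A", False)] * 40
         + [("B", True)] * 30 + [("B", False)] * 70)

def train(records):
    """'Learn' each group's historical hire rate - the simplest
    possible model of 'do what we did before'."""
    hired = Counter(group for group, was_hired in records if was_hired)
    total = Counter(group for group, was_hired in records)
    return {group: hired[group] / total[group] for group in total}

model = train(history)
print(model)  # {'A': 0.6, 'B': 0.3} - the old disparity, now automated
```

No one told the model to treat group B differently; the human choice of training data did that on its own, which is exactly where Nilsson says bias creeps in.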
The UK government launched an investigation last year to determine the levels of bias in algorithms that could affect people’s lives, though the results have yet to be published. Conducted by the Centre for Data Ethics and Innovation, it focuses on areas where AI has clear potential, including policing, recruitment and financial services, but where poor implementation could seriously harm people’s lives. The Centre will explore the potential for algorithmic decision-making to counter social bias, as well as the risks of algorithms producing the wrong results.
One of the key factors in ensuring the fairness of any algorithm, Nilsson adds, is the diversity of the team that is building it. She points to another example of an algorithm being used to predict someone’s likelihood of ending up in prison based on facial characteristics. “Because some groups are overrepresented in the data sample, and underrepresented in the individuals building the product, those groups are at risk of being negatively biased against,” she says. “There is a real risk to perpetuate stereotypes and racism.” A report by Google in 2016 found that women, Black people and Hispanic people were hugely under-represented in computer science courses, meaning the pipeline into data science and AI jobs is overwhelmingly white and male.
Nimmi Patel, policy manager for skills, talent and diversity at industry body TechUK, says it’s crucial to build AI “in the lens of the diverse world we want to live in”. “With AI, we have the opportunity to make something that can drastically improve the lives of many in society, but if we continue to develop it whilst failing to grasp the importance of both historical and contemporary bias, it could have dire consequences for generations to come.” She adds that systems will continue to be built and trained using the data we have, inevitably including historical race and gender bias, but we can take steps to mitigate this, even if not eradicate it completely.
“We will avoid this by doing two things – improving the diversity of teams developing emerging technologies and addressing bias in data that will train and feed algorithmic decision-making technologies,” says Patel. “Of course, improving diversity will in turn ensure less bias in development. Steps are being taken by industry to remove biased datasets as and when they are identified. For example, IBM recently launched a tool aimed at detecting AI bias, analysing how and why algorithms make decisions in real time. Researchers are also working on automated bias-detection algorithms, which are trained to mimic the anti-bias processes humans use when making decisions, to mitigate against our own inbuilt biases.”
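One widely used sanity check of the kind such tools automate is the “four-fifths rule” from US employment guidance: if any group’s selection rate falls below 80% of the best-treated group’s rate, the outcome is flagged as potential adverse impact. A minimal sketch, with made-up selection rates (this is an illustrative metric, not IBM’s tool):

```python
def disparate_impact(selection_rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    times the best-treated group's rate (the 'four-fifths rule')."""
    best = max(selection_rates.values())
    return {group: rate / best < threshold
            for group, rate in selection_rates.items()}

# Hypothetical selection rates from a screening algorithm
rates = {"group_a": 0.50, "group_b": 0.35, "group_c": 0.48}
print(disparate_impact(rates))
# group_b: 0.35 / 0.50 = 0.7, below the 0.8 threshold -> flagged
```

A check like this is cheap to run on any scored dataset, which is why monitoring selection rates per group is usually the first step such bias-detection tooling takes.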
Some technology companies have responded to this need by introducing roles or boards to oversee how algorithms are developed, including video interviewing and assessment technology company HireVue. Last year it announced it had recruited an expert advisory board comprising “world-class experts in algorithmic bias, IO psychology [organisational psychology], and data privacy and security”, who would abide by a set of ethical principles in building AI. Leading tech companies including Google, Facebook and Apple jointly formed the Partnership on AI in 2016 to encourage research on the ethics of AI, including issues of bias.
Training in ethics
Nilsson adds that “very few data scientists willingly create biased or inaccurate models”, and that we need to invest more in educating new entrants to the industry in ways to mitigate these risks. “There is no defined training curriculum or professional accreditation for data scientists, and many data scientists have little training around ethics and bias,” she says. “It is like sending a junior doctor into an operating theatre with basic anatomy only, without explaining all the things that can go wrong and how to mitigate the risk of the patient dying.”
Patel concludes that organisations need to take a more holistic view of how these systems are developed, and whether the people and data involved in them reflect the diverse communities around them. “Without diverse perspectives round the table at every stage – from conception to design to creation – it can be dangerous,” she says. “There’s nothing wrong with the data, and there’s nothing wrong with the model; what’s wrong is that ingrained biases in society have led to unequal outcomes in the workplace, and that isn’t something you can fix with an algorithm. As technology becomes increasingly influential in our lives, we need to ensure it works for all of us. If we don’t do this, tech will simply perpetuate the same biases and discriminatory attitudes that are present today.”