"Behaviour is the mirror in which everyone shows their image." - an old white guy (Johann Wolfgang von Goethe)
IBM announced today that it has designed new software that scans AI systems as they work, in order to detect bias and explain the automated decisions being made. The aim of the tool is to strip out unconscious racial or sexist bias within organisations and to monitor whether decisions are being shaped by ingrained prejudice.
The news comes hot on the heels of comments this week from Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum, about how the tech industry's diversity problem is having damaging implications for the future of artificial intelligence development. The tech industry has a diversity problem - this isn't news to anyone - but that industry bias is creating problems within AI algorithms.
From Google's image-recognition software labelling a black man and his friend as 'gorillas' to image-recognition software associating women with kitchens, more and more issues are cropping up, and the ethical debate around artificial intelligence is becoming ever greater. Issues arise when machine learning methods are used to train systems on past human decisions, which may reflect historic prejudice; the dominance of white men of a certain age has often been pointed to as a root cause of bias creeping into the algorithms behind AI. Of course, this is a reason why diversity programmes and organisations like STEMettes are so crucial in encouraging the next generation of girls to take up Science, Technology, Engineering & Maths (STEM).
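To see how training on past decisions bakes prejudice in, here is a minimal sketch using an invented toy dataset (the groups, numbers and "hiring" scenario are all hypothetical, not drawn from any real system): a naive model that simply learns the majority historical outcome for each group will faithfully replay whatever bias sat in the historical labels.

```python
from collections import Counter

# Invented historical hiring decisions as (group, hired) pairs.
# The labels encode a hypothetical past bias: group "m" was hired
# far more often than group "f".
history = [("m", 1)] * 80 + [("m", 0)] * 20 + [("f", 1)] * 30 + [("f", 0)] * 70

def train(records):
    """A deliberately naive 'model': predict the majority past
    outcome for each group seen in the training data."""
    counts = {}
    for group, hired in records:
        counts.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'m': 1, 'f': 0} - the model simply replays the historical bias
```

No real model is this crude, but the failure mode is the same: nothing in the training step distinguishes a legitimate pattern from an unjust one, so the prejudice in the labels becomes the prediction.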
But can we wait around for the next generation to tackle this bias? Considering the speed at which AI is developing, no, not really. Of course, IBM isn't the only company trying to tackle the issue. Google has recently launched a "what if" tool, Accenture announced a fairness tool a few months ago, Microsoft is working on a bias detection toolkit and Facebook is testing a tool to help it determine whether an algorithm is biased.
Perhaps that old white guy was close to hitting the nail on the head, and AI behaviour really is the mirror in which everyone shows their image. If that's the case, the reflection is a worrying one that needs to be taken seriously. While it is encouraging to see more attempts to tackle the bias in the algorithms, we clearly still need to tackle the issues in our society too. For now, instead of ending with a quote from an old white guy, I'll leave you with words from the wise Maya Angelou:
“It is time for parents to teach young people early on that in diversity there is beauty and there is strength.”
IBM is launching a tool which will analyse how and why algorithms make decisions in real time. The Fairness 360 Kit will also scan for signs of bias and recommend adjustments. There is increasing concern that algorithms used by both tech giants and other firms are not always fair in their decision-making. However, as they increasingly make automated decisions about a wide variety of issues such as policing, insurance and what information people see online, the implications of their recommendations become broader. Often algorithms operate within what is known as a "black box" - meaning their owners can't see how they are making decisions. The IBM cloud-based software will be open-source, and will work with a variety of commonly used frameworks for building algorithms.
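The article doesn't spell out what "scanning for signs of bias" means in practice, but one standard check fairness toolkits of this kind compute is the "disparate impact" ratio: the rate of favourable outcomes for an unprivileged group divided by the rate for a privileged one. The sketch below is not IBM's API - the function name and the loan-decision data are invented for illustration - it just shows the arithmetic behind that one metric.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favourable-outcome rates between two groups.

    outcomes: list of 0/1 decisions (1 = favourable)
    groups:   parallel list of group labels
    """
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Invented loan decisions: group "a" approved 3/10, group "b" approved 6/10.
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0] + [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
groups = ["a"] * 10 + ["b"] * 10

ratio = disparate_impact(outcomes, groups, "a", "b")
print(round(ratio, 2))  # 0.5 - well below the common "four-fifths" threshold
```

A ratio near 1.0 suggests the two groups are treated alike; values below roughly 0.8 (the "four-fifths rule" used in US employment law) are a common red flag that prompts the kind of adjustment recommendations the Fairness 360 Kit is said to make.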