How can we overcome biases in algorithms?
Today’s consumer encounters artificial intelligence (AI) technologies several times a day, often without realizing it. Their food delivery driver uses AI-enabled route planning, highly targeted ads appear every time they browse the internet, and even their smart assistant’s responses are powered by AI.
Yet even as AI has become commonplace in society, complex issues remain around its ability to perpetuate societal biases related to race, gender, age, and sexuality. There are countless examples of AI solutions reflecting the biases of the data that powers their systems.
Even big tech companies like Twitter aren’t immune to algorithmic bias. Users of the social media platform noticed that its image-cropping algorithm would automatically focus on white faces over Black faces. Although the company said the AI had been tested for bias before launch, that testing clearly didn’t go far enough.
AI-based facial recognition solutions have also come under intense criticism, with the “Gender Shades” project finding that although facial recognition algorithms achieve high overall classification accuracy, error rates are higher for female, Black, and 18–30-year-old subjects than for other groups.
The prevalence of AI bias is now well known to developers and businesses. Consulting firm Gartner predicted in 2018 that, through 2022, 85% of AI projects would deliver erroneous outcomes due to bias in data, algorithms, or the teams managing them.
For privacy and ethics expert Ivana Bartoletti, the power of AI to exacerbate existing inequalities is vast and more attention needs to be paid to how AI biases can be combated.
“We have internalized the idea that there is nothing more objective, neutral, informative and effective than data. It is misleading. When an algorithm receives data, a decision has already been made. Someone has already decided that some data should be chosen and some should not. And if the data is, in fact, people, then some of us are singled out while others are silenced,” Bartoletti writes in her book, An Artificial Revolution: On Power, Politics and AI.
Perhaps the biggest challenge for companies is to first identify how pervasively bias has already penetrated the data they hold, and then work to prevent these human-made biases from being introduced into AI systems.
Due to the complex nature of AI systems, it is particularly difficult to uncover potential biases that may arise during use. For example, if the datasets that feed an AI network already contain a bias introduced by its human developers, the AI will build on that bias and produce skewed results.
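One simple way to surface this kind of data-level bias before training is to compare outcome rates across demographic groups in the raw dataset. The sketch below (the hiring scenario and the sample data are purely illustrative assumptions, not drawn from any real system) shows the idea in plain Python:

```python
from collections import defaultdict

def positive_rate_by_group(records):
    """Share of positive labels for each group in a labeled dataset.

    records: iterable of (group, label) pairs, where label is 0 or 1.
    Large gaps between groups suggest the data encodes a historical
    bias that a model trained on it would likely reproduce.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical hiring data: (applicant group, was_hired)
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 0), ("B", 1), ("B", 0)]
rates = positive_rate_by_group(data)
print(rates)  # group A is hired three times as often as group B here
```

A check like this says nothing about *why* the gap exists, but it flags datasets that deserve scrutiny before they ever reach a model.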
Major tech organizations have released toolkits that help developers identify and remove biases found in machine learning models. The IBM Watson OpenScale service gives developers access to real-time bias detection and mitigation and helps explain how the AI arrives at its results, increasing trust and transparency.
Google also launched its What-If Tool, which offers detailed visualization of machine learning model behavior and lets developers test models against fairness criteria to find and mitigate bias.
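These toolkits expose much richer diagnostics, but the core of one common fairness check they support can be sketched in a few lines of plain Python. The example below computes a disparate impact ratio between a protected group and a reference group; the "four-fifths rule" threshold mentioned in the comment, and the sample predictions, are illustrative assumptions rather than output of either tool:

```python
def disparate_impact(preds, groups, protected, reference):
    """Ratio of positive-prediction rates between a protected group
    and a reference group. Under the informal 'four-fifths rule',
    values below ~0.8 are commonly treated as a red flag."""
    def rate(g):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical binary model predictions for ten individuals
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["ref"] * 5 + ["prot"] * 5
ratio = disparate_impact(preds, groups, "prot", "ref")
print(round(ratio, 2))  # prints 0.25 -- well below the 0.8 threshold
```

A single metric like this is only a starting point; the toolkits above combine several fairness criteria precisely because no one number captures every form of bias.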
There is no doubt that companies will need to invest more time in eliminating bias from their AI systems. Organizations that fail to deliver fair AI risk significant damage to their reputation and the loss of their customers’ trust.