The problem of bias in AI is not primarily a problem of AI design per se, but of historical societal conditions of inequality and prejudice that are reflected in, or missing from, the datasets an AI system learns from.

Potential for bias in artificial intelligence
Humans make decisions and take actions underpinned by unconscious biases and emotional reflexes. Artificial intelligence algorithms make predictions based on the models we create and the data we input. Any dataset involving human decision-making will therefore contain, to some extent, inherent elements of human prejudice, misunderstanding and bias.
Mathematician Cathy O’Neil argues that the mathematical models within software applications that process data and power the ever-increasing ‘data economy’ amplify the weaknesses of human decision-making and create cumulative loops of injustice and inequality.
Zou and Schiebinger’s commentary in Nature identified numerous examples of sexist and racist outcomes produced by skewed training data:
“Biases in the data often reflect deep and hidden imbalances in institutional infrastructures and social power relations.”
For example, an AI system designed to identify candidates for an executive position might favour white males in its recommendations because historical data contains a higher frequency of this group than any other. The narrow AI agent does not have the reflexivity to ask why this might be the case, or the means to identify and correct its own biases.
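As a minimal sketch of this mechanism, the fragment below scores candidates by their group’s historical hire rate, using an entirely invented hiring history (the group labels and counts are hypothetical). A model of this kind simply reproduces whatever imbalance the historical data contains:

```python
# Hypothetical historical hiring records: (group, hired) pairs.
# The imbalance below is invented purely for illustration.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 5 + [("B", False)] * 15

def hire_rate(group):
    """Naive model: score a candidate by their group's historical hire rate."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# The model learns nothing but the historical imbalance:
print(hire_rate("A"))  # 0.8
print(hire_rate("B"))  # 0.25
```

Nothing in the scoring function is unjust in isolation; the injustice is inherited wholesale from the skewed records it summarises.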
Bias occurs not only because of unjust patterns in the training data, but also because of a lack of training data. A Georgia Institute of Technology study found that object detection models, such as those used in self-driving cars, might be less accurate for people classified as dark-skinned than for people classified as light-skinned. There is also potential for AI systems to enable human bias in new ways — facial recognition technology has been shown to disproportionately misidentify people with dark skin tones.
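The accuracy gap described above can be made concrete with a small, invented evaluation (the numbers are illustrative, not from the study): a single aggregate accuracy figure can look acceptable while a per-subgroup breakdown reveals the disparity.

```python
# Hypothetical detection results: (subgroup, correct) pairs.
# Counts are invented for illustration only.
results = [("light", True)] * 95 + [("light", False)] * 5 \
        + [("dark", True)] * 7 + [("dark", False)] * 3

def accuracy(records):
    """Fraction of records where the detector was correct."""
    return sum(1 for _, ok in records if ok) / len(records)

overall = accuracy(results)
by_group = {g: accuracy([r for r in results if r[0] == g])
            for g in ("light", "dark")}

print(round(overall, 3))  # 0.927 — aggregate figure hides the gap
print(by_group)           # {'light': 0.95, 'dark': 0.7}
```

This is why fairness evaluations disaggregate metrics by subgroup rather than reporting a single headline number.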
Nobel Prize-winning psychologist Daniel Kahneman’s work on human decision-making suggests that human beings cannot overcome all forms of bias. Since we know that some degree of bias will be present in any data created from human decisions, it is essential that we take a more considered or ‘slow’ approach to the design of AI systems and their learning data, correcting the parameters that lead to biased outcomes.
Once bias is identified in AI training data or outputs, changes to the algorithms could potentially create an AI system with far less bias or irrationality than humans exhibit. This could become a societal benefit of AI in future applications.
