AI Social Impact #3: The Bias Edition
For this edition, we focus on how the use of AI to make decisions that involve humans chips away at the American Dream. We’ll talk about biases and how they are very much present, despite the notion that artificial intelligence is incapable of prejudice.
Part of the American Dream is the belief that anyone can pursue a better life for themselves regardless of their background — a sentiment proclaimed as an inalienable right in the Declaration of Independence. And while we try to uphold this ideal (for all people), we struggle to avoid our natural tendency toward bias (favoritism or discrimination). To some, AI promises a world where decisions are made objectively, without bias. AI solutions are quickly being deployed across a wide range of social contexts, from the courts to employment, insurance policies, and even driving.
Unfortunately, AI models are not free of bias, because bias is transferred from the data used to train them. This is how Google’s image classification mislabeled a group of black people as gorillas: the training data didn’t include enough images of black people. More recently, Joy Buolamwini demonstrated that the gender of black women is identified correctly just 65% of the time, versus 99% for white men. In another study, Rachael Tatman showed that automatic speech recognition (ASR) systems don’t recognize women’s voices as well as men’s. A similar issue affects people with disabilities, arguably a population that could benefit greatly from ASR. As cars become voice-activated, there is a danger that, due to these model biases, only white males will be able to enjoy such conveniences.
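To make the transfer mechanism concrete, here is a minimal, hypothetical sketch (all numbers invented; this is not any real vision or ASR system). A classifier fits a single decision threshold on pooled data in which one group supplies 90% of the examples. The threshold that maximizes overall accuracy lands where the majority group is classified perfectly while the minority group fares no better than chance:

```python
# Hypothetical illustration of training-data bias (all numbers invented):
# a single decision threshold is fit on pooled data dominated by group A,
# so it lands where group A is classified perfectly and group B is not.

def scores(center, n, width):
    # n evenly spaced classifier scores centered on `center`
    return [center + (i / n - 0.5) * width for i in range(n)]

# Group A (majority, 900 examples per class) is well separated around 0.5.
pos_a, neg_a = scores(0.8, 900, 0.5), scores(0.2, 900, 0.5)
# Group B (minority, 100 examples per class) is also separable, but its
# ideal threshold would sit near 0.3, not 0.5.
pos_b, neg_b = scores(0.4, 100, 0.1), scores(0.2, 100, 0.1)

def accuracy(t, pos, neg):
    correct = sum(s >= t for s in pos) + sum(s < t for s in neg)
    return correct / (len(pos) + len(neg))

# "Training": pick the threshold that maximizes accuracy on the pooled data.
best_t = max((t / 100 for t in range(101)),
             key=lambda t: accuracy(t, pos_a + pos_b, neg_a + neg_b))

acc_a = accuracy(best_t, pos_a, neg_a)  # 1.0: perfect for the majority
acc_b = accuracy(best_t, pos_b, neg_b)  # 0.5: coin-flip for the minority
```

Under these invented numbers, the pooled objective selects a threshold of 0.45, exactly where group A is classified perfectly while group B does no better than chance, even though a threshold near 0.3 would classify group B perfectly. No individual in group B "matters" to the objective, because the group contributes only 10% of the data.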
The AI-is-objective trope is particularly dangerous, since 1) it’s easy to assume a software-based decision-making system has no biases, and 2) AI systems have access to unprecedented quantities of personal data, spanning gender, race, education, income, employment, and politics. Google famously showed different search results for black-sounding names than for white-sounding names, which perpetuates racial stereotypes. Courts are already using AI to decide prison sentences, despite many documented flaws in this approach. One significant problem is that systematic bias ingrained in past sentences is transferred to new sentences. Instead of the crime driving the sentence, race drives the sentence, because the data show that black people receive longer prison sentences. The implication is that you as an individual no longer matter; instead, your demographics define you.
Employment decisions can suffer from the same types of bias. So many applications cross recruiters’ desks that over 45 percent of job applicants never hear back from a company; for this reason, companies have tried to use AI to streamline talent acquisition and make it easier for people to get new jobs. Of course, the data feeding these systems are susceptible to the same structural biases ingrained in other parts of society. This may help explain why some recruiters suggest inserting words like “Oxford” or “Cambridge” into a resume in invisible (white) text to game the AI.
The idea that AI can predict our potential for crime, success, and love is uncannily anticipated in Philip K. Dick’s The Minority Report. Before we reach that dystopian version of reality, it pays to address the bias in AI that is steadily eroding the American Dream. Doing so requires more commitment and action than Google showed when it addressed the gorilla debacle by simply removing “gorilla” and “chimpanzee” from the suggested image labels. Thankfully, many organizations, such as the Algorithmic Justice League and Data & Society, and conferences like Ethics in NLP and FAT*, are raising awareness of the ethical implications of AI.
At Pez.AI, we teach everyone how to identify bias. We also review our datasets to ensure we have acceptable levels of representation across different demographics where appropriate.
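As one sketch of what such a dataset review might look like (the function name, group labels, and threshold below are illustrative assumptions, not our actual tooling or a recommended standard):

```python
from collections import Counter

def representation_report(group_labels, min_share=0.25):
    # Share of each demographic group in a dataset, with a flag for any
    # group falling below a chosen minimum share (the 0.25 default is an
    # arbitrary illustration, not a recommended standard).
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {g: (n / total, n / total >= min_share) for g, n in counts.items()}

# 8 of 10 samples come from group "m", so group "f" (20% share) is flagged
# as falling below the 25% minimum.
report = representation_report(["f", "m", "m", "m", "m",
                                "m", "m", "m", "m", "f"])
```

A report like this only surfaces imbalance; deciding what counts as an acceptable level of representation for a given application still requires human judgment.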