Companies viewed as less legally liable for discrimination by algorithms, study finds

People are less outraged by gender discrimination caused by algorithms.

Businesses and governments are using algorithms to improve decision-making in hiring, medical care, and parole. Algorithms have the potential to reduce human bias in decision-making, yet they frequently reach discriminatory conclusions themselves.

A new study published by the American Psychological Association examined whether people are less outraged by algorithmic discrimination than by human discrimination. The research suggests that people are less morally outraged when gender discrimination is caused by an algorithm rather than by direct human involvement.

Across eight experiments with a combined total of more than 3,900 participants from the United States, Canada, and Norway, the researchers tested this effect in the context of gender discrimination in hiring. To describe their findings, they coined the term "algorithmic outrage deficit."

When presented with various scenarios describing gender discrimination in hiring decisions caused by algorithms or by humans, participants were less morally outraged about discrimination caused by algorithms. Participants also believed companies were less legally liable for discrimination caused by an algorithm.

Lead researcher Yochanan Bigman, Ph.D., a postdoctoral research fellow at Yale University, said, “It’s concerning that companies could use algorithms to shield themselves from blame and public scrutiny over discriminatory practices. The findings could have broader implications and affect efforts to combat discrimination.”

“People see humans who discriminate as motivated by prejudice, such as racism or sexism, but they see algorithms that discriminate as motivated by data, so they are less morally outraged. Moral outrage is an important societal mechanism to motivate people to address injustices. If people are less morally outraged about discrimination, they might be less motivated to do something about it.”

Some of the experiments used a scenario based on a real-life example of alleged algorithm-based gender discrimination by Amazon that penalized female job applicants. Though the study concentrated on gender discrimination, comparable results were obtained when one of the eight experiments was replicated to examine race and age discrimination.

Knowledge about artificial intelligence didn't appear to make a difference. In one experiment with more than 150 tech workers in Norway, participants who reported greater knowledge of artificial intelligence were still less outraged by gender discrimination caused by algorithms.

The researchers did find that learning more about a specific algorithm can change how people feel about it. In another experiment, participants expressed greater outrage when they were told that a hiring algorithm resulting in gender discrimination had been developed by male programmers at a company with a history of sexism.

Bigman said, “Programmers should be aware of the possibility of unintended discrimination when designing new algorithms. Public education campaigns also could stress that discrimination caused by algorithms may be a result of existing inequities.”

Journal Reference:

  1. Bigman, Y. E., Wilson, D., et al. Algorithmic discrimination causes less moral outrage than human discrimination. Journal of Experimental Psychology: General. DOI: 10.1037/xge0001250
