Amazon awards grant to UI researchers to reduce discrimination in AI algorithms

A team of researchers from the University of Iowa has received $800,000 from Amazon and the National Science Foundation to limit the discriminatory effects of machine learning algorithms.

Photo: University of Iowa researcher Tianbao Yang sits at his desk working on AI research Friday, April 8, 2022. (Larry Phan)

University of Iowa researchers are examining how artificial intelligence and machine learning models can treat people unfairly on the basis of race, gender, or other characteristics because of the data they are trained on.

A research team from the University of Iowa has been awarded an $800,000 grant jointly funded by the National Science Foundation and Amazon to reduce the possibility of discrimination through machine learning algorithms.

The three-year grant is split between UI and Louisiana State University.

According to Microsoft, machine learning models are files trained to recognize specific types of patterns.

Qihang Lin, a UI associate professor in the Department of Business Analytics and a co-investigator on the grant, said his team wants to make machine learning models fairer without sacrificing an algorithm’s accuracy.

RELATED: UI professor uses machine learning to show relationship between body shape and income

“People these days in [the] academic domain say, if you want to impose fairness in your machine learning results, you have to sacrifice accuracy,” Lin said. “We kind of agree with that, but we want to come up with an approach that [makes the] compromise more effectively.”
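
The trade-off Lin describes can be sketched in code. The example below is a hypothetical, minimal illustration rather than the team’s actual method: it assumes a logistic model and adds a demographic-parity penalty whose weight lam controls how much accuracy is traded for fairness. All names and data here are made up.

```python
import numpy as np

def demographic_parity_gap(probs, group):
    """Difference in average predicted positive rate between two groups (0 and 1)."""
    return abs(probs[group == 0].mean() - probs[group == 1].mean())

def penalized_loss(w, X, y, group, lam):
    """Logistic loss plus a fairness penalty; a larger lam gives up accuracy for fairness."""
    probs = 1.0 / (1.0 + np.exp(-(X @ w)))
    log_loss = -np.mean(y * np.log(probs + 1e-12) + (1 - y) * np.log(1 - probs + 1e-12))
    return log_loss + lam * demographic_parity_gap(probs, group)

# Toy usage: the same weights are scored with and without the fairness penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)
group = rng.integers(0, 2, size=200)
w = rng.normal(size=3)
print(penalized_loss(w, X, y, group, lam=0.0), penalized_loss(w, X, y, group, lam=1.0))
```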

Lin said one form of discrimination created by machine learning algorithms is disproportionate predictions of recidivism rates, the tendency of a convicted criminal to reoffend, for different social groups.

“For example, let’s say we look at American courts, they use software to predict how likely a convicted criminal is to re-offend and they realize that this software, this tool that they are using, is biased because they predicted a higher risk of recidivism for African Americans compared to their actual risk of recidivism,” Lin said.
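
The disparity Lin describes can be checked with a simple audit. The sketch below is illustrative only, with made-up data and hypothetical column names rather than any real court software: it compares each group’s rate of being flagged high risk with its observed reoffense rate.

```python
import pandas as pd

# Hypothetical data: each row is one defendant.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [1, 1, 1, 0, 1, 0, 0, 0],
    "reoffended": [1, 0, 0, 0, 1, 1, 0, 0],
})

audit = df.groupby("group").agg(
    predicted_rate=("predicted_high_risk", "mean"),
    actual_rate=("reoffended", "mean"),
)
# A positive gap means the tool flags the group as high risk more often than it reoffends.
audit["overprediction_gap"] = audit["predicted_rate"] - audit["actual_rate"]
print(audit)
```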

Tianbao Yang, a UI associate professor of computer science and the principal investigator on the grant, said the team proposed a collaboration with Netflix to encourage fairness in the process of recommending shows or movies to users.

“Here we also want to be fair in terms of, for example, gender of users, race of users, we want to be fair,” Yang said. “We are also collaborating with them to use our developed solutions.”

Another example of machine learning algorithms being unfair is in determining which neighborhoods to allocate medical resources to, Lin said.

RELATED: UI College of Engineering uses artificial intelligence to solve problems across campus

In this process, Lin said the “health” of a neighborhood is determined by looking at household spending on medical bills. “Healthy” neighborhoods receive more resources, creating a bias against lower-income neighborhoods that can spend less on medical resources, Lin said.

“There’s a bad cycle that kind of reinforces the knowledge that machines mistakenly have about the relationship between income, medical expenses in the home, and health,” Lin said.
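
That reinforcing cycle can be illustrated with a toy simulation. The sketch below is a hypothetical model, not the researchers’ work: resources are allocated in proportion to observed medical spending, and the assumption that receiving more resources raises a neighborhood’s recorded spending makes the gap widen each year.

```python
# Assumed starting spending for two hypothetical neighborhoods.
spending = {"higher_income": 4000.0, "lower_income": 1500.0}

for year in range(3):
    total = sum(spending.values())
    share = {n: s / total for n, s in spending.items()}  # spending-based allocation shares
    print(year, {n: round(v, 2) for n, v in share.items()})
    # Assumption: more allocated resources leads to more recorded spending next year.
    spending = {n: s * (0.8 + 0.4 * share[n]) for n, s in spending.items()}
# The lower-income neighborhood's share shrinks every year even if its actual health
# needs are equal or greater, which is the reinforcing bias Lin describes.
```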

Yao Yao, a third-year doctoral student at UI in the mathematics department, conducts various experiments for the research team.

She said the group’s goal is important because it looks beyond simply reducing errors in the predictions of machine learning algorithms.

“Before, people only focused on how to minimize the error, but most of the time we know that machine learning, AI, will cause some discrimination,” Yao said. “So it’s very important because we focus on fairness.”
