How AI Algorithms Produce Biased Decisions

Google, Amazon, schools, governments and many other institutions around the world use some form of artificial intelligence (AI). As the algorithms and data science behind AI become more pervasive in society, some researchers are focusing their efforts on making these technologies fairer for everyone.

Eunsil Oh, assistant professor of sociology at the University of Wisconsin, is working on a collaborative project with researchers in the computer science department to study how bias can enter algorithms and perpetuate social inequalities.

“I thought fairness in algorithms was such an interesting topic because it kind of sets you up to try to solve these problems that might arise,” Oh said. “I wanted to use my expertise in gender and sociology as it relates to inequalities, [and] it doesn’t just include gender. It also includes race, ethnicity, sexuality and ableism.”


Oh’s main focus in this project is the gender pay gap. Large companies such as Google and Amazon use algorithms for the first round of hiring selection. When algorithms are trained on years of biased data, they end up making biased decisions, Oh said. This can perpetuate gender inequalities in employment, salary and productivity forecasts. Because women have been paid less than men for years, that statistical discrimination persists in the data fed into AI systems.
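
To make that mechanism concrete, here is a minimal sketch in Python, with entirely invented numbers rather than any data from Oh’s project: a salary model fit to historical records in which women were paid less learns that gap as if it were a real pattern and reproduces it in its predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented "historical" data: same experience distribution for everyone,
# but women were systematically paid $8,000 less.
is_woman = rng.integers(0, 2, n)        # 1 = woman, 0 = man
experience = rng.uniform(0, 20, n)      # years of experience
salary = (50_000 + 2_000 * experience
          - 8_000 * is_woman
          + rng.normal(0, 3_000, n))

# Fit ordinary least squares, the way a simple salary-forecasting tool might.
X = np.column_stack([np.ones(n), experience, is_woman])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)

# The model treats the historical penalty as signal and applies it to
# new candidates: same experience, lower predicted offer for women.
print(f"learned gender coefficient: {coef[2]:,.0f}")   # roughly -8,000
man = np.array([1.0, 10.0, 0.0])
woman = np.array([1.0, 10.0, 1.0])
print(f"predicted offer, man with 10 years:   {man @ coef:,.0f}")
print(f"predicted offer, woman with 10 years: {woman @ coef:,.0f}")
```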

Additionally, Oh and her team are conducting an audit of hiring practices. They create fake resumes in which they control variables such as gender, race and name. They submit these resumes to real job postings and observe which variables affect the likelihood of being hired.

At the same time, the project’s computer science team created an algorithm that analyzes these resumes and decides whether or not to hire each candidate, then assigns the candidate a starting salary. By doing this, Oh said, the team can see how biased the algorithms are and suggest ways to refine them to overcome those biases.
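
A rough sketch of that controlled-variable idea, assuming a deliberately biased stand-in scoring function (invented here for illustration; it is not the project’s algorithm): hold every field fixed, flip only the gender attribute, and compare the hiring decisions.

```python
import random

def score_resume(resume: dict) -> float:
    """Stand-in screening model that has absorbed a biased pattern."""
    score = 1.5 * resume["years_experience"] + 2.0 * resume["skills"]
    if resume["gender"] == "F":
        score -= 3.0   # the learned penalty the audit is designed to surface
    return score

def hire(resume: dict) -> bool:
    return score_resume(resume) >= 18.0

random.seed(1)
profiles = [{"years_experience": random.randint(1, 15),
             "skills": random.randint(1, 10)} for _ in range(1_000)]

# Matched pairs: identical resumes that differ only in the gender field.
rate_m = sum(hire({**p, "gender": "M"}) for p in profiles) / len(profiles)
rate_f = sum(hire({**p, "gender": "F"}) for p in profiles) / len(profiles)
print(f"hire rate, men:   {rate_m:.1%}")
print(f"hire rate, women: {rate_f:.1%}")
```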

UW computer science professor Jerry Zhu is studying methods to ensure that AI does its intended job and doesn’t produce unexpected results. He said natural language systems, such as chatbots, can be vulnerable to displaying bias. These systems are designed to repeat what humans have said and done in the past. Since minorities may be underrepresented in the dataset, these systems may reproduce social inequalities.

“It’s potentially a way for the system to amplify those biases, especially if the system is used to make important decisions,” Zhu said.
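
A toy example of that amplification, using invented data: a language model that always emits the most frequent continuation turns a 75/25 skew in its training text into a 100/0 skew in its output.

```python
from collections import Counter

# Tiny invented corpus with a 3-to-1 skew in who gets quoted.
corpus = [
    "the engineer said he",
    "the engineer said he",
    "the engineer said he",
    "the engineer said she",
]

counts = Counter(line.split()[-1] for line in corpus)
completion = counts.most_common(1)[0][0]

print(f"training counts: {dict(counts)}")                     # {'he': 3, 'she': 1}
print(f"completion for 'the engineer said': {completion!r}")  # always 'he'
```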

AI has already disproportionately affected minorities on several occasions. Facebook had to apologize after its software asked users who had watched a video featuring Black men whether they wanted to keep seeing “primate” videos, according to NPR. In 2015, Google had to remove the words “gorilla,” “chimpanzee” and “ape” from its image recognition software after it labeled Black men as “gorillas.”


Additionally, Oh studies how the criminal justice system uses AI. She said bias could lead to unfair bail decisions or sentencing. However, each case is unique, making it hard to tell if the AI was right or wrong.

Police departments also often use AI, which further contributes to existing systemic issues.

“I would say that the use of big data has increased in policing and it has reproduced existing inequalities rather than reducing them so far,” Oh said.

The AI used in policing focuses on ZIP codes, Oh said, which can be problematic since police will patrol neighborhoods that historically record more crime. This leads to more arrests in those neighborhoods. More arrests lead to more police on patrol, and the system begins to spiral out of control.
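
That spiral can be illustrated with a toy simulation (an assumption for illustration, not the model Oh studies): if patrols follow past arrest records and only patrolled areas generate new records, a small initial disparity locks in even when the underlying crime rates are identical.

```python
# Two neighborhoods with identical underlying crime, but a slightly
# uneven historical arrest record.
recorded_arrests = {"A": 60, "B": 40}
true_crime_per_year = {"A": 10, "B": 10}

for year in range(1, 6):
    # The predictive tool sends patrols to the zone with the most records...
    hotspot = max(recorded_arrests, key=recorded_arrests.get)
    # ...and only the patrolled zone has its (equal) crime observed.
    recorded_arrests[hotspot] += true_crime_per_year[hotspot]
    print(f"year {year}: patrol {hotspot}, records = {recorded_arrests}")

# Zone A's record grows every year while B's crime goes unrecorded,
# even though both zones have the same true crime rate.
```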

AI isn’t inherently bad, and biased algorithms aren’t intentional. Zhu said no respectable AI software company intends to produce biased AI, but biases can still seep in through its data. Zhu suggested these systems need some sort of safeguard, such as keeping a human in the decision-making process or building systems that are more resistant to bias.

“You can compel it to be aware of certain biases, and you can ask it never to do something that is ethically wrong,” Zhu said.
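
One possible reading of that safeguard, sketched with hypothetical thresholds and field names (nothing here comes from the interview): the system acts automatically only when it is confident and unflagged, and routes every other case to a human reviewer.

```python
def screening_decision(score: float, confidence: float,
                       flagged_by_bias_audit: bool) -> str:
    """Act automatically only on confident, unflagged cases."""
    if flagged_by_bias_audit:
        return "escalate to human review"   # never auto-act on flagged cases
    if confidence < 0.9:
        return "escalate to human review"   # uncertain -> a person decides
    return "advance candidate" if score >= 0.5 else "reject"

print(screening_decision(score=0.7, confidence=0.95, flagged_by_bias_audit=False))
print(screening_decision(score=0.7, confidence=0.60, flagged_by_bias_audit=False))
print(screening_decision(score=0.9, confidence=0.99, flagged_by_bias_audit=True))
```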


Zhu likened AI to a food product, which comes with a nutrition label and an ingredient list. He said future AI systems may be required to include a similar “nutrition label” to help users know what they are using. Current AI systems lack such a label.
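
What might such a label list? A minimal sketch, with fields that are illustrative inventions (the idea parallels the “model cards” proposed in the machine learning literature):

```python
# Every field below is an illustrative invention, not a real product's label.
nutrition_label = {
    "model": "resume-screening-classifier",
    "intended_use": "first-round screening, with human review of rejections",
    "training_data": "historical hiring records, 2010-2020",
    "known_limitations": [
        "salary predictions inherit historical pay gaps",
        "minority groups underrepresented in training data",
    ],
    "audits": "hire-rate gap by gender and race, checked quarterly",
}

for field, value in nutrition_label.items():
    print(f"{field}: {value}")
```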

The challenge is to identify how an AI system makes its decisions. Zhu called these systems a “black box” because there is no way to dissect and understand them at this time. Still, he thinks it’s important to continue this research to make AI fair for everyone.

“It’s extremely important because AI is like any other product, and you don’t want a defective product,” Zhu said.

These examples show why it will be important for the development of AI systems to include input from sociologists like Oh.

Oh considers herself a qualitative researcher, meaning her data and observations come from interactions with real people. Once the pandemic hit, however, she could no longer gather observations through those interactions. She was drawn to this study by her colleague in the computer science department, Kangook Lee, who invited her to collaborate on a project centered on social issues. In this way, Oh could bring her background in sociology to the emerging field of big data.


Oh feels lucky to bring her sociological training to this technical field. She hopes studies like these can lead to greater collaboration between sociology and data science, and she thinks people should continue to bring a broader philosophical perspective to data-driven projects that tackle these social issues.

“My role is really to give an overview of what it’s all about,” Oh said. “It’s about social inequalities reproducing in this new fusion platform and that’s inevitable for us.”
