AI and fairness: Elke Rundensteiner and her students develop algorithms to correct for bias in automated rankings

When it comes to fairness, artificial intelligence (AI) is flawed.

Despite their apparent superpowers, the AI-based algorithms that drive important decision-making can carry or even amplify the biases of their human creators. These invisible biases can lead to unintended consequences when AI is used to combine the preferences of multiple decision makers, some of whom may be biased, to rank applicants for jobs, scholarships, loans, awards, or other accolades.


But Elke Rundensteiner, the William Smith Dean's Professor in the computer science department and founding director of the data science program at WPI, and her students are developing a way to solve this problem with algorithms that help ensure fairness in aggregate rankings that have a profound impact on people's lives. The work was supported by a grant of nearly $500,000 from the National Science Foundation.

“It is a difficult problem to incorporate the preferences of multiple decision makers, who may harbor biases, into a combined consensus ranking and also to ensure that this aggregated ranking fairly includes diverse individuals from underrepresented groups,” says Rundensteiner. “As AI plays a bigger role in society, further impacting our way of life, we need effective mechanisms to achieve both fairness and consensus in rankings.”

Rankings are used everywhere for life-altering decisions and can be created by combining the preferences of individual decision makers. Committee members interviewing job applicants, for example, could submit their preferred applicants to an AI-based program, which would then be used to produce an aggregate ranking of applicants.
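To make the idea concrete, here is a minimal sketch of how several individual preference lists can be combined into one consensus ranking, using the classic Borda count together with a simple check of how well a group is represented near the top. This is an illustration only, not Rundensteiner's algorithm; the candidate names, the `borda_aggregate` and `top_k_share` functions, and the choice of Borda scoring are all assumptions made for the example.

```python
# Illustrative rank aggregation via Borda count, with a simple
# representation check on the top-k slots of the consensus ranking.
# NOTE: This is a toy sketch, not the NSF-funded fairness algorithm
# described in the article.

from collections import defaultdict


def borda_aggregate(rankings):
    """Combine several preference lists into one consensus ranking.

    Each ranking lists candidates best-first. A candidate in position
    p of an n-item list earns (n - p - 1) points; highest total wins.
    Ties break alphabetically for determinism.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, candidate in enumerate(ranking):
            scores[candidate] += n - pos - 1
    return sorted(scores, key=lambda c: (-scores[c], c))


def top_k_share(consensus, group, k):
    """Fraction of the top-k consensus slots held by a given group."""
    return sum(1 for c in consensus[:k] if c in group) / k


# Three committee members each submit a preferred ordering of applicants.
votes = [
    ["Ana", "Bo", "Cy", "Dee"],
    ["Bo", "Ana", "Dee", "Cy"],
    ["Ana", "Cy", "Bo", "Dee"],
]
consensus = borda_aggregate(votes)
print(consensus)
print(top_k_share(consensus, {"Cy", "Dee"}, k=2))
```

A fairness-aware aggregator would go further than this sketch: rather than merely measuring the top-k share after the fact, it would adjust the consensus so that underrepresented candidates are not systematically pushed down by biased individual lists.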

Sharon D. Cole