Teens help MIT teach loan algorithms not to be racist

Artificial intelligence software can be taught to mimic human decision-making. But that means it can also be taught to imitate human biases, like racism or sexism. Two California high school students, in partnership with a computer scientist and a Massachusetts Institute of Technology undergraduate, say they have found a way to prevent this, at least in some cases.

Brothers Jashandeep and Arashdeep Singh worked with MIT researcher Amar Gupta and sophomore Ariba Khan to create DualFair, a new method of training the artificial intelligence software that helps banks decide whether to approve mortgage applications.

The Singh brothers attend Floyd Buchanan High School in Clovis, California. Both are Sikhs, followers of a religion that originated in India around 600 years ago, and they say they have been regularly subjected to ethnic and religious slurs. “We were really passionate about fighting discrimination and helping other people facing discrimination,” said Jashandeep Singh.

Jashandeep Singh and Arashdeep Singh.

Gupta, who led the effort, said he typically teams up with MIT students on such projects, through a program called Hackers Heaven that targets ambitious undergraduates. But exceptional high school students who apply are sometimes welcome. When the Singh brothers asked to participate, Gupta decided to give them a chance.

“In general, we are very selective,” he said. “In the case of high school students, we are even more selective… It’s almost fate that this happens.”

AI programs learn to make decisions by studying datasets made up of millions of real-life examples. Mortgage AI programs use datasets containing millions of past mortgage applications. By studying which applications were approved and which were not, the software learns to recognize the characteristics of a creditworthy borrower.
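As a rough illustration, training such a system can be as simple as fitting an off-the-shelf classifier to a table of past decisions. The sketch below, in Python with the scikit-learn library, is a hypothetical example rather than the team's actual pipeline; the file name and feature columns are placeholders.

    # A minimal sketch (not the team's code) of how a mortgage model
    # learns from past human decisions. The file name and feature
    # columns are hypothetical placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Each row is a past application; "approved" records the human decision (1/0).
    data = pd.read_csv("past_mortgage_applications.csv")
    features = data[["income", "loan_amount", "credit_score", "debt_to_income"]]
    labels = data["approved"]

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)

    model = RandomForestClassifier(random_state=0)
    model.fit(X_train, y_train)  # learns the patterns behind past approvals
    print("held-out accuracy:", model.score(X_test, y_test))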

But those millions of old mortgage applications were handled by humans, some of whom were biased against Black, female, or Hispanic borrowers. Adding their biased decisions to the training database could teach the AI software to make similarly biased decisions.

DualFair is not an artificial intelligence program. Instead, it is a way to prepare the training databases used by these programs. It uses a variety of techniques to analyze past home loan decisions collected by the federal government. It then tries to weed out data that could teach an AI to make unfair judgments not just about race, but also about gender and ethnicity.

For example, if an AI system in training indicates that a Black applicant should be rejected for a loan, the DualFair method tests for bias by changing the applicant’s demographic attributes and retesting: the same application might be resubmitted as if the applicant were white, or male, or Hispanic, with no other data altered. If the AI approves the loan when the same person is identified as, say, a Hispanic male, that is evidence of bias. Whenever this happens, that application is deleted from the training database. Only applications that would be approved or denied regardless of race, gender, or ethnicity are used for AI training.
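In code, that flip-and-retest check might look something like the sketch below. This is only an illustration of the concept, not the published DualFair implementation; the demographic categories, field names, and the predict function are all assumptions.

    # A simplified sketch of the flip-and-retest check described above.
    # Category lists, field names, and "predict" are assumptions,
    # not the DualFair code.
    import itertools

    RACES = ["White", "Black", "Asian"]  # example categories only
    SEXES = ["Male", "Female"]
    ETHNICITIES = ["Hispanic", "Non-Hispanic"]

    def decision_flips(predict, applicant: dict) -> bool:
        """True if the model's verdict changes when only race, sex, or
        ethnicity is swapped on an otherwise identical application."""
        verdicts = set()
        for race, sex, eth in itertools.product(RACES, SEXES, ETHNICITIES):
            clone = {**applicant, "race": race, "sex": sex, "ethnicity": eth}
            verdicts.add(predict(clone))
        return len(verdicts) > 1  # mixed verdicts flag a biased data point

    # Applications whose verdict flips are dropped before final training:
    # cleaned = [a for a in applications if not decision_flips(predict, a)]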

DualFair also makes sure to include an equal percentage of accepted and rejected applications for each possible combination of race, ethnicity, and gender.
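One simple way to achieve that balance is to downsample each demographic subgroup until approvals and denials appear in equal numbers, as in the sketch below; the paper’s own sampling scheme may differ, and the column names are assumptions.

    # A rough sketch of the balancing step: make approvals and denials
    # appear in equal numbers within every race/ethnicity/sex subgroup
    # by downsampling. An illustration of the idea, not the paper's method.
    import pandas as pd

    def balance_subgroups(df: pd.DataFrame) -> pd.DataFrame:
        parts = []
        for _, sub in df.groupby(["race", "ethnicity", "sex"]):
            approved = sub[sub["approved"] == 1]
            denied = sub[sub["approved"] == 0]
            n = min(len(approved), len(denied))
            if n == 0:
                continue  # skip subgroups missing one outcome entirely
            parts.append(pd.concat([approved.sample(n, random_state=0),
                                    denied.sample(n, random_state=0)]))
        return pd.concat(parts, ignore_index=True)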

When a mortgage AI was trained using DualFair and tested on real mortgage data from seven U.S. states, the system was less likely to reject applications from otherwise qualified borrowers based on their race, gender, or ethnicity.

The team’s research was published online in March by the peer-reviewed academic journal Machine Learning and Knowledge Extraction. The researchers now hope to find out whether DualFair can be used to eliminate bias in other types of AI programs, including software used by doctors. For example, Gupta oversaw a study that found that hospitalized Black patients receive painkillers less frequently than white patients. A medical version of DualFair, if it works, could correct that imbalance and spare Black patients unnecessary suffering.

Meanwhile, the Singh brothers have been admitted to MIT for the fall semester. But they have not yet decided whether to accept; both said they have been accepted to several other colleges as well.


Hiawatha Bray can be contacted at [email protected]. Follow him on Twitter @GlobeTechLab.
