Algorithms Aren’t Perfect, But They’re Better Than People | American Enterprise Institute

Algorithms are increasingly used by public bodies and private companies to analyze big data and make informed decisions. Their effects are perhaps most visible in baseball, where they have transformed defensive positioning and pitcher usage and radically changed the strategy of the game. Used correctly, algorithms can be a boon to society and limit our reliance on arbitrary, incomplete, and sometimes inaccurate personal judgments. However, fearing that algorithms will reinforce racial bias, many social justice advocates are fighting against their use in areas such as policing and college admissions.

College admissions algorithms have been around for a long time. Prior to a 2003 Supreme Court decision, the University of Michigan used a 150-point scale in which various factors were given fixed weights. More recently, however, citing the continued underrepresentation of black students, a growing share of selective colleges have limited the use of algorithms that rely on SAT/ACT scores. Indeed, the American Bar Association is poised to decide that the LSAT will no longer be required for admission to law school. These changes have occurred even though these exams are consistent predictors of freshman grades, entry into STEM majors, and law school performance. For these reasons, MIT recently defended its reintroduction of SAT/ACT scores into its admissions process.
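To make the fixed-weight approach concrete, here is a minimal sketch of a points-based scoring rubric of the kind described above. The factors and weights are purely hypothetical illustrations, not Michigan’s actual rubric.

```python
# Hypothetical fixed-weight admissions index; weights and factors are
# illustrative only, not any university's actual rubric.

WEIGHTS = {
    "gpa": 20,               # points per GPA unit (4.0 scale)
    "test_percentile": 0.3,  # points per SAT/ACT percentile
    "leadership": 5,         # flat bonus for documented leadership
    "legacy": 4,             # flat bonus for an alumni relation
}

def admissions_points(applicant: dict) -> float:
    """Score an applicant on a fixed points scale."""
    points = 0.0
    points += WEIGHTS["gpa"] * applicant.get("gpa", 0.0)
    points += WEIGHTS["test_percentile"] * applicant.get("test_percentile", 0.0)
    points += WEIGHTS["leadership"] * applicant.get("has_leadership", False)
    points += WEIGHTS["legacy"] * applicant.get("is_legacy", False)
    return points

# Example: a 3.8 GPA, 90th-percentile test taker with leadership experience.
print(admissions_points({"gpa": 3.8, "test_percentile": 90, "has_leadership": True}))
```

The appeal of such a scheme is its transparency: every point can be traced to a stated weight, which is exactly what makes it both auditable and contestable.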

The algorithmic controversy is not limited to college campuses – algorithmic policing has also faced intense backlash. In 2019, PredPol algorithms were used by over fifty police departments to determine the most effective allocation of resources. However, claims that they reinforce racist practices have led to pushback. In 2018, UCLA professor Jeffrey Brantingham was accused of “retrench[ing] and naturaliz[ing] the criminalization of blackness in the United States.” In 2020, a faculty colleague hosted a “Dismantling Algorithmic Policing in Los Angeles” forum, which singled out PredPol.

After Eric Garner’s murder, hundreds of members of the American Mathematical Society signed a letter demanding that mathematicians stop collaborating with police, including in attempts to refine algorithms to eliminate bias. The letter declared, “Given the structural racism and brutality of American police, we do not believe that mathematicians should collaborate with police departments in this way. It’s just too easy to create a ‘scientific’ veneer for racism.”

Two other areas where the use of algorithms has been criticized for its potential to disproportionately impact black Americans are employment decisions and protective services for children. The hiring process is full of bias, especially the initial screening, where companies winnow candidates before the interview stage. Many companies have traditionally relied on recommendations from trusted employees or friends and have often focused on a few crude metrics, primarily where a candidate’s degree comes from. This clearly limits interviews for those with more limited personal networks or degrees from less prestigious schools. When the traditional red flags are added – criminal records, absences from the job market, and a lack of professional qualifications – young black men are disproportionately eliminated.

Critics, however, have pointed out disparities that remain when algorithms replace these personal judgments. When a supervised learning model was used, only 2% and 5% of the applicants who passed the initial CV screening were black and Hispanic, respectively. Yet carefully constructed algorithms can mitigate traditional biases, unearthing predictors of candidate performance that humans might miss. MIT researchers designed an alternative algorithm, labeled an “exploration-oriented model,” which “strikes a better balance between hiring proven groups and taking a chance on candidates from less well-represented groups.” With this alternative algorithm, the shares of black and Hispanic candidates who passed the initial screening increased to 14% and 10%, respectively.
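For readers curious how an “exploration-oriented” screener differs from a purely supervised one, the sketch below illustrates the general idea with an upper-confidence-bound style bonus for groups the model has rarely seen. It is a hedged illustration of the technique, not the MIT researchers’ actual model, and the function names and numbers are assumptions.

```python
import math

# Illustrative exploration-oriented screening score (UCB-style).
# A purely supervised screener ranks by predicted_quality alone; the
# exploration bonus grows for groups with few past observations, so the
# model occasionally "takes a chance" on under-represented groups.

def screening_score(predicted_quality: float,
                    group_observations: int,
                    total_observations: int,
                    exploration_weight: float = 1.0) -> float:
    """Predicted quality plus an uncertainty bonus for rarely seen groups."""
    bonus = exploration_weight * math.sqrt(
        math.log(total_observations + 1) / (group_observations + 1)
    )
    return predicted_quality + bonus

# A candidate from a group with 50 prior observations vs. one with 5,000,
# out of 10,000 screened applications overall.
rare = screening_score(0.60, group_observations=50, total_observations=10_000)
common = screening_score(0.65, group_observations=5_000, total_observations=10_000)
print(rare, common)  # the rare-group candidate outranks a slightly higher prediction
```

The design choice is the bonus term: a supervised screener ranks purely on predicted quality, while the exploration term shrinks as a group accumulates observations, so the model converges on the evidence while still sampling under-represented pools.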

Additionally, LinkedIn and CareerBuilder have made adjustments to rectify biases implicit in their algorithms that disadvantaged certain candidates. As a result, the data strongly suggest that female, black, and Hispanic applicants face fewer employment headwinds from properly developed algorithms than from more personalized traditional screening procedures.

More controversial are the algorithms used by some child protection services. “Workers should not be asked to make, in any given year, 14, 15 or 16,000…decisions with incredibly imperfect information,” said Erin Dalton, director of the Allegheny Count Department of Social Services and pioneer in the implementation of the predictive child protection algorithm. He believes the algorithms “provide scientific verification of the personal biases of call center workers.”

These strong biases are quite obvious. For many years, agencies have recommended, wherever possible, that children not be separated from their mothers. More recently, protective agencies have moved aggressively to rely on kinship placements as much as possible. Naomi Schaefer Riley remarks that while kinship care can be effective, it has too often been relied upon, leaving many children unnecessarily in vulnerable situations.

Allegheny’s algorithm was immediately criticized when it suggested that 32.5% of black children reported as neglected should receive a “compulsory” investigation, compared to 20.8% of white children. However, these results could simply reflect the documented disparate incidence of abuse: a black rate of 14 per 1,000 children compared to a white rate of 8 per 1,000. In addition, the mandatory-investigation differential was the same as before the algorithm was adopted, and social workers were able to reduce the racial disparity generated by the algorithm. So, despite fears, it appears that these algorithms have indeed improved child welfare assessments.
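A back-of-the-envelope calculation, assuming (hypothetically) that referral rates are similar across groups and that flags simply track the incidence figures cited above, shows how a group-blind rule can still yield different mandatory-investigation shares:

```python
# Illustrative arithmetic only: if flags track underlying incidence and both
# groups generate roughly the same number of referrals, the flagged shares
# differ even though the rule itself is group-blind.
# The 40-per-1,000 referral rate is a hypothetical assumption.

def flagged_share(incidence_per_1000: float, referrals_per_1000: float) -> float:
    """Share of referrals flagged when flags mirror underlying incidence."""
    return incidence_per_1000 / referrals_per_1000

print(flagged_share(14, 40))  # 0.35 -> roughly a third of referrals flagged
print(flagged_share(8, 40))   # 0.20 -> roughly a fifth of referrals flagged
```

Under those assumed rates, the shares land near the 32.5% and 20.8% figures, consistent with the point that disparate outputs can reflect disparate incidence rather than a biased rule.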

Racial disparities can be compounded by prejudice and structural barriers. However, rejecting an algorithm simply because it produces disparities is shortsighted. As the hiring and protective-services applications show, algorithms can be more effective because they are transparent and can therefore be adjusted in response to specific shortcomings. Although algorithms are not free from bias, there is no better alternative.

Robert Cherry is a recently retired professor of economics at Brooklyn College and is affiliated with the American Enterprise Institute.
