Algorithms can be biased. Could audits help?

This week, the Department of Justice settled a lawsuit with Meta, Facebook’s parent company, over ad-targeting algorithms the government called discriminatory. The tools served housing ads to users who, in the company’s words, “look alike.” You can probably see the problem.

Meta said it would change its targeted-advertising tool and subject it to what is called an algorithmic audit.

Sasha Costanza-Chock is a researcher at the Algorithmic Justice League and co-author of a recent article on algorithmic audits. They say audits can be done within companies, by contractors or by third parties like researchers and journalists. The following is an edited transcript of their conversation with Meghan McCarty Carino.

Sasha Costanza-Chock (courtesy Caydie McCumber).

Sasha Costanza-Chock: Many companies build these first-party audit teams internally because they’d like to find problems with their systems before those systems harm people in the real world. First-party auditors have broad access to all the data and systems involved, so in theory, you know, they should be able to do a great job. But the problem is that if they find something wrong, they don’t have to release it to the public, so we have no idea what they find or whether the companies are taking action to correct the issues. Second-party auditors almost always sign nondisclosure agreements, which keep them from going public with the issues they find. So the only ones doing audits they can share with the world are third-party auditors, but third-party auditors have the least access to the systems they’re investigating.

Meghan McCarty Carino: Now, another issue you’ve identified is that there aren’t exactly standards for the methodology of these audits. I mean, how much variation did you find?

Costanza-Chock: We found a very wide range of variation. Most auditors look at things like the accuracy or fairness of training and sample data. But only 60% of them say they examine the security and privacy implications of the algorithmic system. And only about half of them say they check whether companies have good systems for reporting harm. So if an algorithmic decision actually harms someone — let’s say someone is unfairly denied the opportunity to rent a home because the algorithm that screens tenant applications is racist, which is happening around the world right now — well, only half of auditors say they’re checking whether people who have been harmed have a way to report that harm to the company. This is something we think should be part of any audit. But it doesn’t always happen.

McCarty Carino: To what extent does it appear that these audits are actually having the intended effect of improving transparency and accountability?

Costanza-Chock: It’s extremely hard to say right now, because of the lack of disclosure we talked about earlier. That could change. There are growing calls for regulators to audit these systems carefully. So, for example, the Federal Trade Commission can potentially assess algorithms to determine whether they’re discriminatory. And if it finds that an algorithmic system was developed with data collected without consent, it can actually order the company to destroy the data set and the algorithm. So there are cases where a government regulator has the power and the access, but we need a lot more.

McCarty Carino: I mean, looking at the news this week, we see that Microsoft has announced its own responsible AI standard; it says it will limit the functionality of its facial recognition technology. Should those kinds of decisions be left in the hands of the companies themselves?

Costanza-Chock: We absolutely cannot rely on corporations to do the right thing — even those that want to do the right thing. They’re in a competitive environment. If they spend a lot of time, energy and resources auditing their systems and making sure they’re as harmless as possible, they’ll be at a competitive disadvantage. So even the companies themselves are asking regulators to step in and set the rules of the game. And of the auditors we spoke to, almost none were able to share their audit methods or results with us, but almost all of them said they thought disclosure should be a legal requirement.

Sasha Costanza-Chock co-wrote the article with AJL founder Joy Buolamwini and Deborah Raji. It’s called “Who Audits the Auditors?” It identifies several ways to make auditing more effective, including better systems for reporting harm caused by biased algorithms in the real world and for collecting feedback from communities at risk of harm early in the process.

Or, if visual learning is more your thing, the Algorithmic Justice League has produced a helpful companion video to the article, narrated by Buolamwini, who was on this show last year to talk about bias in facial recognition software.

The Algorithmic Justice League’s video summary of “Who Audits the Auditors?” narrated by Joy Buolamwini.

Speaking of which, we have more on Microsoft’s recent decision to restrict the use of some of its facial analysis tools, like the one that claimed to identify a person’s emotional state by analyzing facial expressions but has come under criticism for bias and inaccuracy.

I mean, if it was that easy to tell how someone feels just by looking at their face, the whole rom-com genre would cease to exist!

Sharon D. Cole