Stories of Biased Algorithms at Techdirt.
from the not-smart,-better,-or-less-biased.-just-faster dept
There’s a lot of human work to be done, but there never seem to be enough humans to do it. When things need to be processed en masse, we turn to hardware and software. It’s not better. It’s not smarter. It’s just faster.
We can’t expect humans to process massive amounts of data because they simply can’t do it well enough or fast enough. But they can write software that can do tasks like this, allowing humans to do the other things they do best… like passing judgment and dealing with other humans.
Unfortunately, even AI is ultimately human. It isn’t the sentient, “turn everyone into paperclips” menace it’s so often portrayed as in science fiction. Instead, it becomes an unwitting conduit for human bias, one that can produce the same results as biased humans, but at a much faster rate, all while being whitewashed by the assumption that ones and zeros are incapable of bigotry.
But that’s how AI works, even when deployed with the best of intentions. Unfortunately, taking inherently human jobs and subjecting them to automation tends to make societal problems worse than they already are. Take, for example, a pilot program that started in Pennsylvania before expanding to other states. Child protection officials decided software should handle some of the heavy lifting in child safety decisions. But the usual rule applied: garbage in, garbage out.
According to new research from a team at Carnegie Mellon University obtained exclusively by AP, Allegheny’s algorithm in its early years of operation showed a tendency to flag a disproportionate number of Black children for a “mandatory” neglect investigation, compared with white children.
Luckily, the humans were still involved, meaning not everything the AI spat out was treated as child protection gospel.
The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about a third of the time.
But if the balance had shifted toward greater dependence on the algorithm, the results would have been even worse.
If the tool had acted on its own to screen in a comparable rate of calls, it would have recommended that two-thirds of Black children be investigated, compared to about half of all other reported children, according to another study published last month and co-authored by a researcher who audited the county’s algorithm.
There are other safeguards that minimize the potential harm from this tool, which the county relies on to handle thousands of neglect reports a year. Workers are advised not to launch investigations based on algorithmic output alone. As noted above, workers are encouraged to register their disagreement with automated determinations. And the tool is only used for cases of potential neglect or substandard living conditions, rather than cases involving more direct harms like physical or sexual abuse.
Allegheny County is no anomaly. More and more localities are using algorithms to make child protection decisions. Oregon’s state tool is based on the one used in Pennsylvania, but with some useful modifications.
Oregon’s Safety Screening Tool was inspired by the influential Allegheny Family Screening Tool, named after the county surrounding Pittsburgh, and aims to predict the risk that children will end up in foster care or be the subject of an investigation in the future. It was first implemented in 2018. Social workers consult the numerical risk scores generated by the algorithm – the higher the number, the greater the risk – when deciding whether another social worker should go out to investigate the family.
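To make the mechanics concrete: a threshold-based screening tool of this kind boils down to mapping case features to a number and flagging cases above a cutoff. The sketch below is purely hypothetical — the features, weights, score range, and threshold are invented for illustration and do not reflect Oregon’s or Allegheny’s actual model.

```python
# Hypothetical sketch of risk-score screening. The weights, features,
# and threshold are invented and do not reflect any real child-welfare tool.

def risk_score(features: dict) -> int:
    """Map case features to a risk score (higher = greater risk)."""
    # Invented linear weighting of a few illustrative features.
    weights = {
        "prior_referrals": 3,
        "open_welfare_case": 5,
        "caller_is_mandated_reporter": 2,
    }
    raw = sum(weights[k] * int(v) for k, v in features.items() if k in weights)
    return max(1, min(20, raw))  # clamp into an assumed 1-20 band

def should_screen_in(score: int, threshold: int = 15) -> bool:
    """A human worker reviews this flag; it does not act automatically."""
    return score >= threshold

case = {"prior_referrals": 4, "open_welfare_case": True,
        "caller_is_mandated_reporter": False}
print(risk_score(case))                     # 17
print(should_screen_in(risk_score(case)))   # True
```

The point of the sketch is that the bias problem lives in the weights and the training data behind them, not in the arithmetic itself — which is exactly why the output can look objective while reproducing human bias.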
But Oregon officials tweaked the original algorithm to rely only on internal child welfare data when calculating a family’s risk, and deliberately tried to address racial bias in its design with an “equity correction”.
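The report doesn’t detail what Oregon’s “equity correction” actually does. One common family of corrections — shown here purely as an illustrative assumption, not as Oregon’s method — is to calibrate the screening threshold per group so that screen-in rates are comparable across groups:

```python
# Hypothetical illustration of one kind of equity correction: choosing
# per-group thresholds so each group's screen-in rate matches a target.
# The score distributions below are synthetic.

def threshold_for_rate(scores, target_rate):
    """Smallest threshold whose screen-in rate does not exceed target_rate."""
    for t in range(1, 22):  # assumed 1-20 score band; 21 screens in nobody
        rate = sum(s >= t for s in scores) / len(scores)
        if rate <= target_rate:
            return t
    return 21

group_a = [4, 9, 12, 15, 18, 19]   # synthetic score distributions
group_b = [3, 5, 7, 10, 12, 16]

# Aim to screen in roughly a third of each group.
print(threshold_for_rate(group_a, 1 / 3))   # 16
print(threshold_for_rate(group_b, 1 / 3))   # 11
```

The trade-off is visible even in this toy version: equalizing rates means applying different cutoffs to different groups, which is itself a contested fairness choice.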
But Oregon officials decided to drop the tool entirely following the AP investigation released in April (along with a nudge from Senator Ron Wyden).
The Oregon Department of Human Services announced to staff via email last month that after a “thorough analysis,” agency hotline workers would stop using the algorithm in late June to reduce disparities in families being investigated for child abuse and neglect by child protective services.
“We are committed to continual improvement in quality and fairness,” Lacey Andresen, the agency’s deputy director, said in the May 19 email.
There’s no evidence that Oregon’s tool resulted in disproportionate targeting of minorities, but the state apparently believes it’s better to head off the problem than to dig itself out of a hole later. It seems, at least from this report, that the hugely important task of keeping children safe will continue to be done primarily by humans. And yes, humans are more prone to bias than software. But at least human bias isn’t hidden behind an impenetrable wall of code, and it operates far less efficiently than even the slowest biased AI.
Filed Under: ai, Allegheny County, biased algorithms, child services, child protection, racism, risk scores, screening tool