Editorial: Can Algorithms Help or Harm Child Protective Services?

An algorithm is a process that uses mathematics or computing to work through a problem, step by step, toward a solution.

They are increasingly a part of our lives, as computers run nearly everything around us. Algorithms are behind traffic lights, facial recognition on phones, and the Facebook ads that eerily echo what you just talked about with your best friend.

This is artificial intelligence and predictive analytics, marketing and management. Algorithms have become an integral part of carrying out all kinds of work.

But should they be a part of all work? Aren't there a few things that shouldn't be run by equation?

A recent Associated Press article looked at how Allegheny County uses algorithms in child protective services. It is hardly the only place using computing to improve things like health care and protective services. Carnegie Mellon helped the state track COVID-19. Harvard's Teamcore group is exploring ways to use artificial intelligence for public-service goals, including service delivery and using social media to gauge risk among homeless youth.

But there are risks in removing humanity from human services.

The AP story drew on Carnegie Mellon research to show that an Allegheny County algorithm flagged Black children for mandatory neglect investigations more often than white children. Social workers disagreed with the algorithm about a third of the time.

It's not that social workers are perfect. They are not. They miss things, because they are human beings, and people make mistakes.

But algorithms operate in a realm of numbers and percentages. They accept that some things will be missed and others misidentified, but if the odds of accuracy are good enough, well, that's good enough.

When an algorithm calculates the odds that someone is pregnant because they bought a stroller, there is no real downside if the stroller turns out to be just a baby shower gift.

But if Black families are mistakenly judged more likely to neglect their children, the consequences are real.

For families, that could mean unnecessary stress for parents and children. It could also mean that risks to white children are overlooked. For agencies, it could mean added workload, since real people still have to follow up on flagged cases.

“Workers of any kind should not be asked to make, in any given year … 16,000 of these types of decisions with incredibly imperfect information,” said Erin Dalton, director of the Allegheny County Department of Human Services. She called the Carnegie Mellon report “a hypothetical scenario so far removed from how this tool has been implemented to support our workforce.”

Fair enough. But Carnegie Mellon, ranked No. 1 in the country for artificial intelligence by U.S. News & World Report, knows a thing or two about algorithms, and its research shouldn't be dismissed as merely hypothetical.

The thing is, an algorithm isn't really an alternative to people, a way to sidestep human error. It is only one step removed from them. Algorithms are designed by people and carry out their instructions dispassionately. They may be faster, and they may take some of the load off human workers, but they can still absorb the assumptions and perceptions of their flesh-and-blood designers.

That question runs into a perennial problem in the field of protective services. In the name of privacy and child protection, child welfare agencies are opaque. There is no way to independently verify how the algorithms are performing, the way you can with something like a state agency's spending.

With the end result so hidden from view, it is all the more important that the data that is available be taken seriously.

Sharon D. Cole