WILL THE ALGORITHM LET YOU KEEP YOUR CHILD?

Social workers are relying on AI algorithms to help determine which families should be examined for possible intervention.

Not families that anyone has reported for doing anything allegedly illegal.

Just families whose data, culled by government agencies, trigger an algorithmic probability warning.

The Associated Press recently detailed the story and the concerns surrounding both the program and its use.

“A lot of people don’t know that it’s even being used,” revealed Robin Frank, a Pittsburgh attorney who has advocated for families facing separation. “Families should have the right to have all of the information in their file.”

Frank might’ve noted that the Fourth Amendment to the Constitution goes just a little further than that.  It says: 

“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

But despite the law, governments at the federal, state, and even local level are scraping every last shred of data they can from citizens and sharing it all in a nexus of control and surveillance, whether they admit it or not.

In the case of family welfare “pre-crime” prevention, child welfare agencies across the country are using or considering using tools like the one employed in Allegheny County, Pennsylvania, according to the AP.

The wire service reported finding a number of concerns about the technology, including typical “liberal” questions about its reliability and its potential to exacerbate racial disparities in the system.

If there have to be “pre-crime” interventions, at least let there be equity (yes, that’s sarcasm).

The AP also reported that independent researchers who obtained data from the Pennsylvania county found that social workers disagreed with the risk rankings generated by the algorithm roughly one-third of the time.

The disturbing story can be read here.