Can Algorithms Be Fair, Transparent, and Protect Children?

Mar 25, 2020

On this episode of On the Evidence, Emily Putnam-Hornstein of the University of Southern California, Rhema Vaithianathan of Auckland University of Technology, and Beth Weigensberg of Mathematica discuss the use of predictive risk models to help child welfare agencies, which are flooded with calls and short on resources, separate “signal” from “noise” so they can identify and protect children at risk.

As technology improves organizations’ ability to collect, manage, and analyze data, it is becoming easier to inform public policy decisions in areas ranging from health care to criminal justice based on estimates of future risk. On this episode of On the Evidence, three researchers discuss how they work with child welfare agencies in the United States to use algorithms, or what they call predictive risk models, to inform the decisions of case managers and their supervisors.

The guests for this episode are Rhema Vaithianathan, Emily Putnam-Hornstein, and Beth Weigensberg.

Vaithianathan is a professor of economics and director of the Centre for Social Data Analytics in the School of Social Sciences and Public Policy at Auckland University of Technology, New Zealand, and a professor of social data and analytics at the Institute for Social Science Research at the University of Queensland, Australia. Putnam-Hornstein is an associate professor of social work at the University of Southern California and the director of the Children’s Data Network. Weigensberg is a senior researcher at Mathematica.

Vaithianathan and Putnam-Hornstein have already worked with Allegheny County, Pennsylvania, to implement a predictive risk model that uses hundreds of data elements to help the staff who screen calls about child abuse and neglect better assess the risk associated with each call. They are now piloting a similar predictive risk model with two counties in Colorado. Last year, they began a partnership with Mathematica to replicate and scale up their work by offering the same kind of assistance to states and counties around the country.
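To make the general idea concrete, here is a minimal, hypothetical sketch of how a call-screening predictive risk model of this kind might work: a statistical model trained on historical administrative records assigns each incoming referral a coarse risk score for screeners to consider. This is not the Allegheny Family Screening Tool itself; the synthetic data, the logistic regression model, and the 1-to-20 score binning are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of a call-screening predictive risk model.
# This is NOT the Allegheny Family Screening Tool; the synthetic data,
# features, and 1-20 score binning are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for administrative data: each row is a referral (call),
# each column a data element (e.g., counts of prior referrals or service use).
n_referrals, n_features = 5_000, 20
X = rng.normal(size=(n_referrals, n_features))

# Synthetic outcome: whether the referral was later re-referred.
logits = X @ rng.normal(scale=0.5, size=n_features)
y = rng.random(n_referrals) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple logistic regression on historical referrals.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Convert each new call's predicted probability into a coarse 1-20 score,
# so screeners see a simple ranking rather than a raw probability.
probs = model.predict_proba(X_test)[:, 1]
scores = np.clip(np.ceil(probs * 20), 1, 20).astype(int)
print(scores[:10])
```

In practice, as the episode discusses, scores like these supplement rather than replace the judgment of call screeners and their supervisors.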

On the episode, we discuss how they work with child welfare agencies to address common concerns about using algorithms in public policy, such as making algorithms transparent, subjecting algorithm use to independent evaluation, and paying close attention to racial bias and other ethical issues. Play the full episode below.

Want to hear more episodes of On the Evidence? Visit our podcast landing page or subscribe for future episodes on Apple Podcasts or Spotify.

Show notes

Find more information about Mathematica’s partnership with the Centre for Social Data Analytics and the Children’s Data Network here.

Find the publications page for the Centre for Social Data Analytics, which includes research and other resources related to the Allegheny Family Screening Tool, here.

Find the results of an independent evaluation of the Allegheny County predictive risk model here.

Find The New York Times Magazine article about Allegheny County's use of algorithms in child welfare here.

About the Author

J.B. Wogan

Senior Strategic Communications Specialist