Wednesday, May 6, 2026
A new report explores how artificial intelligence can help child welfare workers across the country better support the families they serve — not by replacing human judgment, but by freeing up people to spend more time delivering needed services and less time on repetitive processes.

Frontline child welfare workers carry consequential responsibilities. Every day, they make decisions that directly affect the safety, stability, and futures of children and families. Yet increasingly, these workers have to spend more time navigating administrative systems, managing documentation requirements, and searching for policy guidance — and less time doing the relational, judgment-driven work needed to achieve their mission.

That gap — between what the child welfare workforce seeks to accomplish and what existing systems actually enable them to do — reflects a central challenge addressed in a new report from the IBM Center for The Business of Government in collaboration with the University of Michigan School of Social Work.

Using AI to Improve Child Welfare: Navigating Rules in Real Time by David R. Schwartz offers a timely and grounded examination of how artificial intelligence can be responsibly deployed in child welfare agencies. Drawing on insights from a national roundtable of state and local child welfare leaders, researchers, and practitioners, Schwartz explores how AI strategies — when implemented responsibly — can reduce administrative burden, improve access to policy and case information, better inform professional judgment, and increase accountability.

The report articulates what AI in child welfare should — and should not — do. The promise of AI in this space does not lie in automating decisions about child safety. Those decisions require human judgment, empathy, and accountability. Rather, the opportunity lies in removing the administrative weight that has made this work increasingly difficult. The AI tools described in the report focus on providing answers to policy questions in real time, synthesizing complex case histories, assisting with documentation, and supporting training and professional development — all while keeping humans in the loop. These approaches support the workforce, rather than supplanting it. The applications are not hypothetical; the report points to early, practical implementations already underway in multiple states.

The report also addresses the ethical complexities of deploying AI in high-stakes human services environments. Schwartz integrates historical perspective, applied research, and practitioner experience to develop a set of clear principles for responsible adoption:

  • relief rather than replacement of the frontline workforce,
  • transparency over opacity in how AI systems work and make recommendations,
  • low-risk use cases before high-stakes ones,
  • strong governance structures that can adapt as technology evolves,
  • local control of sensitive data to protect families and maintain public trust, and
  • collaboration across agencies and disciplines to ensure implementation is informed and supports all stakeholders.

This report is the latest in a growing body of work from the IBM Center examining how effective technology, data, and management practices can strengthen government performance. This work includes prior publications such as Responsible AI for Public Evaluation, AI in State Government, GenAI and the Future of Government Work, and AI and the Modern Tax Agency, reflecting the Center's sustained commitment to helping government navigate the opportunities and responsibilities of emerging technology.