Thursday, December 4, 2025
A recent roundtable discussed how responsible application of AI by state governments can help improve services for children and families

The IBM Center for The Business of Government, in collaboration with the University of Michigan School of Social Work, recently convened a roundtable discussion, “The Ethical and Transformative Use of Artificial Intelligence in Child Welfare.” The event brought together public sector leaders, technologists, and academic experts from Alaska, Arizona, California, Connecticut, DC, Florida, Georgia, Illinois, Michigan, Minnesota, Mississippi, New Jersey, Texas, West Virginia, Wisconsin -- and Switzerland! -- to discuss the benefits and challenges of using AI technology to improve child welfare services, and to explore pathways for implementation and collaboration.

Speakers emphasized that AI should be evaluated carefully. Dr. Joseph Ryan and Dr. Brian Perron, Professors and Co-Directors of the Child & Adolescent Data Lab at the University of Michigan, underscored the importance of ethical frameworks and problem-solving mindsets, noting that local AI models can better protect sensitive data. They advocated for AI to handle low-value, repetitive tasks—such as documentation and referrals—enabling social workers to focus on high-value, human-centered work. This approach aligns with the broader goal of using technology to augment, not replace, human knowledge. The discussion also addressed the importance of decision-making consistency and how AI can improve stakeholders’ overall experience.

Practical examples showcased AI’s potential to streamline workflows and improve efficiency. For example, one state developed an AI chatbot that helps investigators securely locate and interpret child welfare policies without exposing personally identifiable information. Another state is integrating AI into child welfare operations, including a contact-note tool that digitizes handwritten notes and a potential application that would scan and read court orders to automatically create referrals for clients.

Despite these promising developments, the discussion acknowledged barriers to adoption. Workforce readiness and trust remain challenges, with concerns voiced about the risk of AI replacing human decision-making. Speakers stressed the need for continuous training, clear communication, and governance frameworks that reinforce AI as a support tool. Privacy and confidentiality were top priorities given the sensitive nature of child welfare data, requiring robust validation systems and secure data handling. Organizational hurdles—such as siloed systems and limited training time—can also slow progress.

The overarching takeaway was clear: AI for child welfare must remain human-centered, transparent, and ethical. Successful integration depends on inclusive design, strong governance, and collaboration across agencies and states. One participant noted that by freeing social workers from paperwork and administrative burdens, AI can help them focus on what matters most—supporting families and building trust. Moving forward, participants agreed that states should start small with low-risk, high-value applications, prioritize workforce readiness, and share lessons learned to create safe, interoperable, and transformative AI tools for child welfare.

A full report on findings from the roundtable will be published in the near future.