Thursday, December 11, 2025
A new report provides a model and recommendations for agencies to consider.

Governments worldwide are exploring how artificial intelligence (AI) can transform public service delivery. But with great potential comes great responsibility. How can agencies harness AI to improve performance measurement and evaluation while enhancing transparency and public trust?

The Center’s newest report, Responsible AI for Public Evaluation: Harnessing AI to Strengthen Public Sector Decisions, authored by Daniel F. Fonner of Southern Methodist University, tackles this critical question. It introduces a practical framework—Responsible AI for Evaluation (RAI-Ev)—as a model for how public administrators can integrate AI into program evaluation and performance auditing responsibly.

Why This Matters

Program evaluation and performance audits are essential for ensuring government programs achieve their goals effectively and efficiently. By applying AI thoughtfully in this domain, agencies can:

  • Gain deeper insights into program outcomes
  • Improve efficiency without sacrificing accountability
  • Enhance transparency and build public trust

What’s Inside the Report

  • A structured framework (RAI-Ev) as a model for using AI in evaluation processes, along with a case study of how the model works in practice
  • Actionable recommendations for embedding responsible AI practices into governance
  • Guidance on implementing AI for evaluation responsibly, in ways that include mitigating bias, safeguarding data privacy, and maintaining human oversight

Key Takeaways

Implemented effectively, AI can augment and support human judgment in evaluations of government program results. This report provides a roadmap for responsible adoption of AI by agencies, helping ensure that technology used for this critical public sector function serves the public good.

Read the full report here: Responsible AI for Public Evaluation

Join the conversation: Follow us on LinkedIn, Instagram, and Threads for insights and updates.