Thursday, April 24, 2025

As the host of The Business of Government Hour, I had the privilege of attending the 2025 American Society for Public Administration (ASPA) Conference in Washington, D.C., themed “Not Robots Yet: Keeping Public Servants in Public Service.” This theme underscored the delicate balance between embracing emerging technologies like artificial intelligence (AI) and preserving the human essence of public administration.

My interviews with Professor Sukumar Ganapati of Florida International University and Assistant Professor Wendy Chen of Texas Tech University provided profound insights into two critical areas: public trust in government and AI governance.

Their perspectives, while distinct, intertwined in meaningful ways, offering complementary visions for how public administration can navigate technological transformation while fostering trust and accountability. I reflect on the key themes and insights from each conversation and explore how they connect to shape a forward-looking agenda for public service.

Reimagining Public Trust in a Technological Age

Professor Ganapati’s discussion centered on two pivotal contributions: his National Science Foundation project, the Advanced Cyberinfrastructure Training for Policy Informatics (ACIT-PI), and his ASPA presentation on the evolution of trust in government. These topics, though seemingly disparate, converged on the idea that technology, when thoughtfully applied, can enhance public administration’s effectiveness and rebuild public trust.

Advanced Cyberinfrastructure Training for Policy Informatics (ACIT-PI) Project: Empowering Public Policy with Data Science

Ganapati’s ACIT-PI project addresses a critical gap in public policy education by integrating data science and machine learning into the curriculum. Traditionally, public policy training has relied on statistical methods, but the rise of data-driven technologies demands new skills. The project’s innovative use of micro-credentials—short, targeted courses of 15 hours or less—offers a practical solution to this challenge. By training doctoral students, assistant professors, and associate professors, ACIT-PI ensures that the next generation of public policy educators is equipped to teach data science methods. The overwhelming response to these courses—over 1,200 applications for just 25 seats—underscores the appetite for such skills in the field.

This initiative is significant because it transforms how public policy problems are analyzed and solved. By applying data science to local government issues, such as optimizing service delivery, the project bridges the gap between theoretical innovation and practical application.

Ganapati emphasized that cyberinfrastructure methods, which encompass not just hardware and software but also human and institutional systems, are about making data actionable for human decision-making.

This focus on practical, data-driven solutions sets the stage for his broader discussion on trust, as efficient service delivery is a cornerstone of public confidence in government.

Trust in Government: A Disaggregated Perspective

Ganapati’s ASPA presentation challenged conventional narratives about declining public trust in government, which often rely on aggregated metrics—such as Pew Research Center surveys showing trust falling from its highs in the 1960s to the mid-20-percent range today. He argued that this “30,000-foot view” oversimplifies a complex reality.

By adopting a disaggregated lens, Ganapati revealed that trust varies significantly between the political and managerial dimensions of government.

While trust in elected officials may be low, trust in the managerial or professional side—where public servants deliver tangible services like passport processing or pothole repairs—remains relatively high.

This distinction is critical because it highlights the role of operational efficiency in fostering trust. Technologies like 311 systems, which streamline non-emergency service requests, exemplify how data-driven tools can enhance service delivery and, consequently, public confidence. Ganapati cited the global diffusion of 311 systems and participatory budgeting platforms, such as Madrid’s “Decide Madrid”, as evidence of technology’s potential to make government more responsive and transparent. However, he cautioned that technologies like AI and blockchain must be implemented with transparency to avoid misuse or misunderstanding, which could erode trust. For instance, blockchain’s decentralized ledger offers inherent transparency, but AI’s opaque decision-making processes require robust guardrails to ensure accountability.

Ganapati’s practical recommendations for rebuilding trust centered on responsible transparency—a two-way street where governments share relevant information without compromising security or overwhelming citizens. He also emphasized the role of academic research in bridging the gap between public perception and government reality, advocating for collaborative dialogues among academics, practitioners, and citizens.

His call for a critical approach to trust, one that embraces constructive disagreement and multiple perspectives, resonated as a pragmatic yet hopeful vision for public administration.

Crafting a Balanced AI Governance Ecosystem

Assistant Professor Wendy Chen’s presentation on AI governance offered a global perspective on how governments can navigate the rapid advancement of AI technologies. Her research, grounded in a sentiment analysis of international AI policy documents, highlighted the need for a balanced “AI policy ecosystem” that fosters innovation while ensuring ethical oversight. Chen’s insights complemented Ganapati’s focus on trust by addressing the governance frameworks necessary to deploy AI responsibly in public administration.

The Urgency of AI Governance

Chen began by framing AI governance as a critical issue in the context of a global “AI boom,” where technologies like generative AI are advancing faster than regulatory frameworks.

She likened AI policy to the rules of a football game, essential for organizing innovation without stifling creativity.

Her analysis revealed stark differences in national approaches: some countries prioritize AI investment and talent development, while others focus on ethical concerns, often at the expense of a holistic strategy. This imbalance, Chen argued, risks either unchecked technological growth or stifled innovation.
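Chen’s sentiment-analysis approach can be caricatured in a few lines of Python. This is a purely illustrative sketch—the word lists and scoring rule are my own assumptions, not her actual method—showing how a lexicon-based score can separate investment-oriented policy language from caution-oriented language:

```python
# Illustrative lexicon-based orientation score for AI policy text.
# These word lists are hypothetical stand-ins, not Chen's lexicon.
PROMOTE = {"investment", "innovation", "talent", "growth", "opportunity"}
CAUTION = {"risk", "bias", "harm", "oversight", "accountability"}

def policy_orientation(text: str) -> float:
    """Return a score in [-1, 1]: positive leans pro-innovation,
    negative leans toward ethical caution, 0 if no cue words found."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    pro = sum(w in PROMOTE for w in words)
    caution = sum(w in CAUTION for w in words)
    total = pro + caution
    return 0.0 if total == 0 else (pro - caution) / total

doc_a = "National strategy emphasizes AI investment, talent, and innovation."
doc_b = "The framework stresses risk, bias, and accountability in AI oversight."
print(policy_orientation(doc_a))  # positive: innovation-leaning
print(policy_orientation(doc_b))  # negative: caution-leaning
```

Even this toy version surfaces the imbalance Chen describes: a corpus of national strategies scoring uniformly positive, or uniformly negative, signals the absence of a holistic policy mix.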

In public administration, AI governance is vital because it sets direction for both public and private sectors. Chen’s upcoming book, Innovation and Entrepreneurship in the Public Sector, underscores the role of “public entrepreneurs”—agency leaders who drive soft innovations like new policies. These actors must approach AI governance with flexibility to avoid unintended consequences, such as overregulation that hampers progress or underregulation that compromises accountability.

Principles of Effective AI Governance

Chen outlined several principles for effective AI governance, which differ markedly from traditional regulatory approaches:

  1. Stakeholder Inclusion: AI governance requires collaborative, multi-stakeholder involvement, ensuring that all voices—government, industry, academia, and citizens—are heard.
  2. Adaptability: Policies must be flexible, proportional to the risks posed by different AI systems, and evolve with technological advancements.
  3. Transparency: AI systems must be transparent in their operations, particularly when they impact individuals’ lives, such as in hiring or law enforcement.
  4. Human Oversight: Ongoing human monitoring is essential to maintain accountability and allow for course corrections.
  5. International Collaboration: AI’s global nature calls for shared principles and cross-border partnerships, as exemplified by frameworks like the OECD AI Principles.

These principles contrast with traditional regulations, which are often rigid and sector-specific. Chen’s ecosystem approach advocates for a dynamic, inclusive framework that balances innovation with ethical considerations, ensuring AI serves public value.

Government’s Role in Balancing Innovation and Regulation

Chen emphasized that governments must play multiple roles in AI governance: as innovation enablers (through R&D investments and public-private partnerships), guides (providing clear regulatory expectations), and promoters of ethical frameworks.

She advocated for risk-based regulation—voluntary standards for low-risk scenarios, sector-specific guidelines for medium-risk cases, and strict rules for high-risk applications.

This nuanced approach ensures that AI advances public administration without compromising trust or safety.
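Chen’s three-tier structure can be sketched as a simple lookup. The specific use cases and tier assignments below are my own illustrative assumptions; only the tiered responses—voluntary standards, sector-specific guidelines, strict rules—follow her description:

```python
from enum import Enum

class Risk(Enum):
    """Regulatory responses tiered by risk, per Chen's framing."""
    LOW = "voluntary standards"
    MEDIUM = "sector-specific guidelines"
    HIGH = "strict binding rules"

# Hypothetical example use cases; a real framework would define
# these categories through the multi-stakeholder process Chen describes.
HIGH_RISK = {"law enforcement", "hiring", "credit scoring"}
MEDIUM_RISK = {"chatbots", "content recommendation"}

def oversight_for(use_case: str) -> str:
    if use_case in HIGH_RISK:
        return Risk.HIGH.value
    if use_case in MEDIUM_RISK:
        return Risk.MEDIUM.value
    return Risk.LOW.value

print(oversight_for("hiring"))          # strict binding rules
print(oversight_for("spam filtering"))  # voluntary standards
```

The design point is proportionality: oversight scales with the stakes of the application rather than applying one rigid rule to every AI system.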

Connecting and Complementing Perspectives

Ganapati and Chen’s insights connect through their shared emphasis on leveraging technology to enhance public administration while prioritizing trust and accountability. Ganapati’s focus on operational efficiency and disaggregated trust aligns with Chen’s call for transparent, stakeholder-driven AI governance. Both recognize that technology—whether 311 systems, data science, or AI—can improve service delivery and public value, but only if implemented with transparency and human oversight. For example, Ganapati’s 311 systems and Chen’s AI-driven decision-making tools both aim to make government more responsive, but they require clear processes to maintain public confidence.

Their perspectives also complement each other in addressing the risks of technology. Ganapati warned of AI’s potential to create opaque decisions, while Chen highlighted the dangers of imbalanced policies that either overregulate or underregulate AI. Together, they advocate for a balanced approach: Ganapati through responsible transparency and critical trust dialogues, and Chen through a holistic AI policy ecosystem. Both emphasized the need for collaboration to ensure technology serves the public good.

Conclusion

The interviews with Sukumar Ganapati and Wendy Chen at ASPA 2025 illuminated a shared vision for public administration in an era of technological transformation. Ganapati’s work on data-driven education and disaggregated trust underscores the importance of efficient, transparent service delivery in rebuilding public confidence. Chen’s AI governance framework offers a roadmap for harnessing AI’s potential while mitigating risks through inclusive, adaptable policies. Together, their insights reinforce the conference theme: keeping public servants at the heart of public service as stewards of trust, innovation, and accountability.

As public administration navigates this complex landscape, their research lights the way toward a future where technology empowers, rather than overshadows, the human element of governance.