Building Trusted Artificial Intelligence – Lessons from Australia
Artificial intelligence (AI) has proliferated across all sectors of society. National governments have created AI-related strategies, frameworks, and guidelines on the ethical use of AI. Yet even as people place faith in AI to produce good and reliable outcomes, they have questions about the safety and security of AI systems. These questions concern public trust in AI itself, as well as trust in government to develop mechanisms for successfully deploying and managing such a powerful technology. They span the design, development, deployment, and evaluation of AI in public services and public policy.
Information technology leaders in government can play a significant role in setting directions for AI that build trust. AI has vast potential to be a force for good: it can change how governments lead, make decisions, and serve their nations. How AI is governed and applied is an important conversation that government and industry must have together to address the needs, security, and progress of delivering services that benefit citizens and industry.
Pathways to Trusted Progress with Artificial Intelligence, a new report released today by the IBM Center for The Business of Government, distills perspectives from an expert roundtable of leaders in Australia. The report, authored by Gregory Dawson with Arizona State University and Kevin Desouza with the Queensland University of Technology in Australia, discusses major questions to help inform government decision making and design principles, including:
- How can governance be an enabler of action and trust, rather than an inhibitor of progress?
- How can AI help to navigate the nuances of meeting government and citizen needs?
- What best practice insights can be drawn from other governments, and how are their outcomes measured?
The expert insights in this report focus on how governments need to develop and communicate a framework that helps the public understand why AI is being used, what has been done to ensure that the AI is fair, transparent, and accurate, what testing was done to ensure that its output is reliable, and how public value from AI is created and measured. By addressing the growth and management of AI, and the governance of data aligned with AI strategies, government can take full advantage of the power of AI.
This framework builds on five themes:
- Government is in the business of providing services, and AI is simply a tool to facilitate that.
- Government is held to a higher standard of performance with AI than private companies are, which makes explainability and transparency of utmost importance.
- Government needs to work holistically in defining standard practices, operating models, and other common approaches for AI.
- Adequate governance is necessary not only for AI technology, but also for the people who build AI systems and the processes used to build them.
- There is a need to distinguish between different types of AI (fully autonomous, semiautonomous, and augmented) in establishing guidelines and approaches.
These themes, coupled with background work done by the authors, gave rise to several recommendations, including promoting AI-human collaboration, focusing on justifiability, insisting on explainability, and building in contestability.
The authors explore case studies that illustrate these insights, addressing AI's potential to transform how government agencies interact with citizens, along with the risks that arise when AI is left unchecked.
This report builds on considerable research that our Center has led about how Australia and other governments can implement AI in ways that build trust, including Charting the Course to Tomorrow’s Trusted Digital Services, Artificial Intelligence in the Public Sector: A Maturity Model, Risk Management in the AI Era, and More Than Meets AI.
We hope that this report helps government leaders across the globe implement pathways to use AI in ways that build public trust.