Eric Nguyen, Australian Computer Society

AI isn’t failing; we’re learning it the wrong way

Across Australia’s public sector, interest in and experimentation with AI are growing. A recent Joint Committee of Public Accounts and Audit report noted that around 60% of agencies are already using generative AI, with more than 70% identifying opportunities to improve service delivery, decision-making, and policy outcomes.

Yet progress remains uneven. While pilots and experimentation are widespread, many organisations are still working through what it means to embed AI into everyday operations at scale - where it is consistently used across teams, integrated with existing systems, governed responsibly, and delivering measurable impact. This gap underscores a critical point: adoption is not just about technology; it is about how people, teams, and processes work with AI to deliver real value for citizens.

Learning by doing: evidence from the field

Over the past three months, I’ve led hands-on AI and secure “vibe coding” (rapid prototyping with AI support under agreed security guardrails) sessions with 200 participants from across the public, private, and not-for-profit sectors, including both technical and non-technical roles. These were not simulations. Teams worked on real-world problems, from service delivery design to rapid prototyping under security and compliance constraints.

Three patterns stood out:

1.    Confidence grows through experience, not instruction. Participants developed competence when they could try, see results, and adjust their approach immediately.

2.    Collaboration amplifies learning. When participants worked side by side - discussing ideas, testing approaches, and observing each other - they moved faster, produced stronger prototypes, and built trust in both the technology and their own judgement.

3.    Momentum continues beyond the session. Many teams continued applying what they learned into their daily work, turning experimentation into practical, measurable improvements.

In short, these sessions showed that adoption happens when people experience AI in context, together, and with guidance, not just when tools are made available.

Understanding AI caution

During these sessions, three common concerns emerged:

•    capability anxiety: “I’m not good at prompting”
•    trust and accuracy concerns: “Can I rely on AI outputs, and are they transparent?”
•    dependence on step-by-step guidance: “I need a structured process so I don’t make mistakes”

These concerns are not resistance. They are natural reactions to unprecedented technology in high-stakes, citizen-facing environments. Public servants, in particular, are responsible for outcomes that affect people’s lives; hesitation reflects caution, not inertia.

What works: lessons for leaders

Anchor AI in purpose, not tools
AI adoption works best when teams focus on real, meaningful problems - citizen services, policy analysis, or operational bottlenecks - rather than learning prompts or mastering the technology in isolation. The tool supports the work, not the other way around.

Combine expert guidance with peer learning
Participants gain confidence when experts provide guidance early, but momentum comes from peer interaction and shared exploration. Open discussion, experimentation, and observing colleagues’ approaches reduce anxiety and build collective understanding.

Embed AI into everyday operations
Real adoption occurs when AI is integrated into daily work, not just pilots. This means consistent usage across teams, connection to operational systems, proper governance, and measurable outcomes. Teams see where AI adds value, and where human judgement remains essential. In government contexts, this means clear guardrails for information handling, privacy, assurance, and accountability.

The leadership imperative

This is not a typical technology rollout. It is a once-in-a-generation shift in how government learns, collaborates, and delivers for citizens. Leaders who succeed are those who:

•    create safe spaces to explore and experiment
•    encourage cross-team learning and knowledge sharing
•    focus on outcomes rather than tool proficiency
•    accept uncertainty as part of the process

AI adoption at scale is less about systems and more about how people experience and integrate it into their work responsibly.

Final thought

For the first time, public servants are not just improving services - they are redefining how services are created and delivered, in real time, with AI as a partner. 

No previous generation has faced this. We are all pioneers - navigating, learning, and shaping the path together. The question is no longer whether AI will be part of public service; it is whether we are willing to step forward, experiment responsibly, and create outcomes that truly serve citizens. Because in public service, being a pioneer is not about moving fast alone; it is about moving forward together - for the good of all.

The views expressed are the author’s and do not necessarily reflect the views of the Department of Finance or the Australian Government.