Dr Paul Hubbard, a/g First Assistant Secretary, AI Delivery and Enablement Division
The APS AI Plan 2025 is explicit: we will 'enhance cross-sector collaboration between government, industry, community and academia' to accelerate technological expertise and innovation, and to 'build trust through transparency and codesign', ensuring initiatives stay people-centred and responsive to real-world needs.
Sometimes collaboration language is read as aspirational: nice, but non-essential. Right now, that reading is a mistake. Cross-sector collaboration is not a decorative value. It is a practical mechanism for getting the hard parts right.
Why collaboration is more than a 'nice to have'
If the goal were simply to 'add AI' to existing workflows, maybe collaboration could be optional. But the real opportunity (and risk) is that AI changes the economics of how information moves through systems—how it is produced, verified, summarised, and acted on. That often means we should not be asking only: 'How do we bring AI into our existing processes?' We should also be asking: 'Which of our systems, workflows, and assurance arrangements now need to be redesigned?'
A simple example illustrates the point. If I use AI to write a longer submission to a policy consultation, and you use AI to summarise that submission back into a few dot points, we have both 'used AI'. But have we improved communication? Or have we simply increased the volume circulating through the system, with less shared understanding and no agreement on what matters?
Cross-sector collaboration matters here because the redesign question cannot be answered by government alone. The people who build the tools, the people who are affected by the outcomes, and the people who set the rules all see different parts of the system. If we want to modernise safely and at pace, we need to bring those perspectives together early.
What I learned from the AI CoLab: evidence from the 'third space'
Over the last year, through the AI CoLab, I co-hosted more than 100 workshops with 1,800+ cross-sector participants spanning government, industry, academia, and civil society. The AI CoLab is a practical 'third space': a structured environment outside day-to-day organisational silos, designed to test ideas, compare assumptions, and build shared understanding.
Across that volume of engagement, several lessons kept repeating—lessons that go directly to why the APS AI Plan puts collaboration at the centre.
Lesson 1: Collaboration breaks the governance Catch-22
A persistent barrier in the APS is the governance paradox: teams feel they must wait for safeguards to be fully defined before experimenting, yet safeguards cannot be defined well without experimentation. Collaborative spaces lower the threshold for responsible learning. I start every workshop by saying that the goal is to learn one new thing about AI and to meet one new person. Psychologically safe settings mean that we learn rather than fail. The evidence that this experimentation generates then strengthens governance, because it turns abstract principles into implementable controls, patterns, and decision points in specific contexts.
Lesson 2: The most valuable learning happens when builders and reformers share the same room
Many of my favourite workshops paired technical practitioners with policy, legal, service delivery, and reform perspectives. Technologists help explain what is feasible and what is not. Reformers help locate second-order impacts: where accountability sits, how citizens experience decisions, what needs transparency, and what constitutes harm in context. The result is better design: fewer 'solutions looking for problems', and fewer policy settings that sound good but cannot be executed. The best AI CoLab workshops always have people who confess to 'knowing nothing about AI' asking some of the best questions.
Lesson 3: Systemic barriers show up at the seams
Some of the biggest constraints are not 'inside' any single agency: fragmented data exchange, inconsistent interpretations of legal settings, procurement and contracting frictions, and accountability models built for a pre-AI world. Collaboration does not magically remove these barriers, but it makes them visible and legitimises them as shared problems. Once a barrier is recognised as systemic rather than local, it becomes possible to coordinate remedies rather than asking each agency to solve the same problem independently.
Lesson 4: Cultural context is essential infrastructure
Cross-sector dialogue, particularly with Culturally and Linguistically Diverse (CALD) and First Nations voices, reinforces that cultural context cannot be bolted on late. If systems exclude minoritised languages or do not respect Indigenous Data Sovereignty, AI can widen the digital divide even while claiming efficiency. Trust is not a communications exercise; it is a result of how we work together. Codesign and transparency are how we build systems that are safe and legitimate in the communities they serve.
Lesson 5: AI adoption requires system leadership, not just project delivery
Finally, the workshops repeatedly pointed to a leadership shift: from viewing AI as a tool rollout to viewing it as system change. Leaders need to ask how AI is already affecting citizens, suppliers, regulators, and staff—and how value and risk accumulate across the entire service chain. Just as importantly, leaders need to model curiosity: building enough hands-on literacy to set meaningful guardrails and make confident decisions.
The practical takeaway
If we treat cross-sector collaboration as optional, we will move slower and take on more risk—because we will learn the same lessons repeatedly, in isolation, and often too late. If we treat it as a core delivery mechanism—structured, repeatable, and linked to governance—we accelerate capability, strengthen trust through transparency and codesign, and improve the odds that AI adoption produces outcomes rather than activity.
The APS AI Plan 2025 is clear on the direction. The practical next step is to operationalise collaboration: create repeatable forums where teams can test use cases, surface systemic barriers, and translate principles into implementable controls. One accessible way to start is to participate in a cross-sector workshop—observe one, then propose one yourself—bringing one live process or decision point you want to redesign. Upcoming AI CoLab sessions for 2026 are available via events.aicolab.org.
It’s time to turn collective intent into collective momentum.