By Zak Kazakoff, Policy Officer, AI Delivery and Enablement Division
Generative AI is polarising
Like many other technologies before it, generative AI is polarising. The debate’s been loud for three years now, and seemingly only getting louder, with many people still wondering what the big deal is and when the bubble will pop. Some parts of the online discourse lament these tools as mindless next-word predictors, spewing misinformation and “logical-shaped” outputs. Other parts adamantly believe this is a new era for humanity, that an exponential take-off is imminent, and that we will have successfully invented machine superintelligence by 2028.
Again, just as with other technologies before it, I think how things actually turn out will be somewhere in the middle. AI companies aren’t doing themselves any favours by marketing their products as “PhD-level assistants”, but equally, people aren’t taking the real capabilities of Large Language Model (LLM) technologies seriously. Some of the harshest critics I’ve spoken to about these tools have barely used them – at least not since three years ago, when they were mostly good for converting Shakespearean text into pirate-speak. To better appreciate where things stand, I think the technology should first be separated from the social context it exists within.
Distinguishing the tech from its context
LLMs like ChatGPT, Claude, and Gemini are built on a kind of software called a neural network, whose structure is loosely inspired by the human brain. LLMs are trained on an enormous corpus of text to learn statistical associations between words in context, and then refined with human feedback to provide useful answers to queries. Functionally, they are next-word predictors, but that doesn’t seem to impair their utility in any meaningful way – it turns out sophisticated word predictors can do a lot. Yet their capabilities are jagged: poor at some things, brilliant at others. They’re imperfect but keep getting better, and they present huge value and opportunity right now, as long as you seek it.
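If you want to see what “learning statistical associations between words” means in the simplest possible terms, here is a deliberately tiny sketch – a toy bigram counter over a made-up ten-word corpus. This is nothing like the neural networks behind real LLMs (the corpus, the function names, everything here is illustrative), but it shows the bare idea of predicting a next word from observed frequencies:

```python
# Toy illustration of "next-word prediction" via word-pair counts.
# Hypothetical mini-corpus for demonstration only -- real LLMs use
# neural networks trained on vastly larger text collections.
from collections import Counter, defaultdict

corpus = "the public service uses ai the public service guides ai adoption".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("public"))  # "service" -- the only word ever seen after it
```

A real LLM replaces these raw counts with a learned probability distribution over its entire vocabulary, conditioned on far more than just the previous word – but the core task, picking a likely next word, is the same.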
To take a brief look at how these tools are playing out in our context: AI companies are run by fallible people (like all of us), operating within a market-based system in a new, open-ended space. While these tools were still in their relative infancy, companies released their products, only for them to be accessed by hundreds of millions of people. Combine the proliferation of generative AI with an internet largely designed to engage rather than inform – an internet that is the main way most of us learn about our world – and the result is lazy AI slop posts on LinkedIn and TikTok, AI Overviews you didn’t ask for, and “AI” seemingly crammed into every product and marketed relentlessly.
I draw this distinction between the tech and its context because I observe that the immediate, visible, bad uses of generative AI are arising through its interaction with our social context (as happens with any tech) – but this is sometimes leading to a curious dismissal of the tech’s capabilities and positive opportunities. In a similar vein, it’d be like dismissing the engine as useless because cars cause hundreds of thousands of road accidents each year, or dismissing the internet because misinformation echo chambers arise on it, or dismissing the smartphone because it has facilitated doom scrolling on social media. Yet for each of these technologies, many positive uses have arisen – and I don’t hear calls for shutting off the internet or banning cars. I wasn’t around when the internet kicked off, but I’ve been told it was a wild west, full of scam sites and viruses, with a stock market bubble that ultimately popped – yet the internet is still around, and it has matured, warts and all.
Immediate, visible opportunism vs. quiet, high-value adoption
The bad uses of AI are the loud ones and, predictably, the most immediate ones. Of course bad actors will use it to produce misinformation – that’s quick and easy to do – while figuring out how to embed these tools into work to accelerate productivity takes time and isn’t so visible. What is most visible and immediate has tainted the view of the positive opportunity these technologies present. Generative AI can certainly cause issues – the tool itself raises concerns, and the social infrastructure around it can amplify them. But quietly, behind the scenes, productive, high-value uses of the technology are happening. These are not regularly broadcast, nor do I feel they’re particularly interesting to recount. You’re not going to be riveted by me explaining how I used Claude to research the Archives Act, which led to the quick generation of a policy note explaining my findings. I’d tell you it was much faster and easier than doing the job the ordinary way, and produced a good-quality first draft I could iterate on – but I wonder if that really connects.
I use Copilot at work to do all manner of productive things. I regularly use it to find information buried deep in my files and emails – the semantic search capability saves real time. I produced a discussion paper much quicker than I’d otherwise be able to, by having Copilot draw on previous discussion papers to generate the raw information while I focused on framing the discussion questions. I had Copilot prepare a one-year chronology of a work project I’d joined midway, sparing me possibly hours of trudging through countless, duplicative emails to make sense of things. I had it produce two thousand words describing staff roles based on existing, higher-level information – a task that took 15 minutes, 13 of which were my review, rather than the hours it would likely have taken manually. And I’ve accelerated an uncountable number of minor tasks involving reworking blocks of information to change style or format – tasks that were always rote and barely cognitive, but took up decent time nonetheless.
Perhaps you’d assume these outputs were hallucinated, misleading, too dodgy to be useful – but that’s not true. They were functional and useful, needing only minor review and editing – which is significantly faster than producing them manually. The power of these tools really is far beyond where it was in 2023. The problem with conveying the value here is that it’s contextual – it’s my work ‘stuff’, and maybe you can’t even picture the examples I’m talking about. If it’s hard to describe my job to you under ordinary circumstances, then of course it’d be hard to describe how generative AI is augmenting that job. In contrast, it’s easy to see, understand, and evaluate the impact of AI slop on social media.
The imperative of stewardship and initiative
This technology is highly individualised, highly contextual, and requires creativity on the part of the user to figure out how to adapt it. It’s an unfortunate reality that we see the worst uses while the positive ones go unnoticed and largely uncommunicated. I don’t want to downplay the bad uses of AI, nor reduce this to a social media slop problem. The point I’ve sought to make is that this technology has both pros and cons, but the discourse right now is fixated on the cons, distorting the view of what these tools are capable of – and that it’s worth giving them another go to leverage their benefits in both your personal and work contexts. The risks are real – disruptive change, weaponised misinformation, hacking, cognitive offloading, how universities will function going forward, workforce impacts – and I spend a lot of time thinking about the impacts these may have and what we can do about them. One more thing I’ll add, though: the extent to which these risks materialise will depend on the extent to which we abdicate any attempt to guide AI adoption.
As a public service, we should be engaged, to leverage these tools for good while mitigating the risks. A public service equipped and informed about how to use these tools is a public service in a position to understand and help guide how the broader society uses them. I think it’s imperative that we learn the lessons from previous waves such as the internet, the smartphone, and social media – all tools with great potential value that we should aim to leverage, but that resulted in many lingering issues that could have been dealt with sooner.
I present fortnightly Practical AI workshops at the AI CoLab for this reason. We cannot bury our heads in the sand; we must do what we can to keep at the forefront of this technology, both for the sake of our own productivity and value-add, and for the sake of guiding its responsible use. If we don’t take it seriously, there’ll be fewer good uses and more bad ones. You can come along to the AI CoLab to learn and share about AI through open workshops, presentations, and discussions with the public service, business, academia, and the community – because this technology will affect our whole society, and we should all be talking with frankness and openness about how to approach managing this change.