Technology Analysis

Australia’s AI paradox: Mass adoption, minimal strategy

Australia rushes to embrace generative AI, but the data, infrastructure and rules shaping its future remain largely offshore and beyond national control (Background image via Binary Technology and Business | DeviantArt, Australia image via Wikimedia Commons)

As generative AI becomes an everyday tool for Australians, new OECD data exposes a dangerous gap between mass uptake and the absence of any serious national strategy to govern it. Paul Budde reports.

NEW OECD DATA shows that more than one-third of people across OECD countries used generative artificial intelligence (AI) tools in 2025.

Among students aged 16 and over, usage rises to around three-quarters. AI has moved from novelty to everyday utility at an extraordinary speed.

Australia is unlikely to lag. If anything, we are probably above the OECD average when it comes to individual use. Australians are enthusiastic adopters of new digital tools, particularly when they arrive embedded in familiar platforms.

But beneath these headline figures lies a more troubling reality: Australia’s AI uptake is largely unplanned, uneven and strategically hollow.

This is Australia’s AI paradox — mass adoption, minimal strategy.

The real divide is institutional, not generational

The OECD highlights a sharp age divide in AI usage, and that matters. Younger Australians are using generative AI instinctively, while many older Australians remain hesitant or excluded. As I have argued before in the context of digital exclusion and ageing, technology adoption without structured support risks widening social and economic gaps rather than closing them.

However, the more consequential divide is not generational — it is institutional.

Students are using AI at scale, yet education systems are scrambling to adapt assessment, curriculum and teaching methods. Workers encounter AI tools informally, but reskilling pathways are fragmented or absent. Employers experiment without guidance, while regulators remain largely on the sidelines.

Once again, Australia is allowing technology to race ahead of policy.

We have seen this pattern before — with the internet, with social media and with cloud computing.

Heavy use, zero sovereignty

As I have written previously, Australia has a chronic weakness when it comes to digital sovereignty. We consume digital services enthusiastically, but we exert almost no influence over the infrastructure on which those services run. That weakness is now becoming critical in the age of AI.

Australia currently has almost no control over its data infrastructure. The vast majority of data generated by Australians – including data used to train, fine-tune and operate AI systems – flows through offshore platforms and cloud services owned by a small number of U.S. hyperscalers. Decisions about data access, pricing, model behaviour and risk management are made elsewhere, under legal and political frameworks that do not reflect Australian democratic priorities.

As I have argued before, data control increasingly equates to policy control. When governments lack leverage over the infrastructure layer, regulation becomes reactive and symbolic rather than effective.

Europe chose a different path

Other jurisdictions have taken a more deliberate approach. Europe’s Digital Services Act and related digital regulation offer a more democratic and privacy-focused alternative to the largely market-driven model that dominates in the United States. While imperfect, the European framework explicitly recognises that digital platforms – and now AI systems – shape public discourse, economic power and civic life, and therefore require public-interest governance.

Australia, by contrast, has largely deferred to U.S. regulatory norms without possessing the scale, bargaining power or legal leverage of the U.S. market itself. We have adopted the technology, but not the governance model that might protect citizens, institutions and democratic accountability.

In the context of generative AI, this leaves Australia exposed. We are embedding AI systems into education, business and government while surrendering control over the data, infrastructure and rules that govern how those systems evolve.

Business adoption exposes a deeper weakness

OECD data also shows that firm-level AI adoption remains concentrated in Information and Communications Technology (ICT) and knowledge-intensive industries. Just over 20 per cent of firms reported using AI in 2025, with growth beginning to moderate.

For Australia, this is a warning sign. Our economy is dominated by small and medium-sized enterprises across services, construction, health, education and local government — precisely the sectors where structured AI adoption could lift productivity, and precisely where policy support is weakest.

As I have argued before in relation to digital infrastructure and productivity, Australia too often assumes that “the market will sort it out”. In practice, without sector-based programs, shared frameworks and public investment, adoption remains shallow and uneven.

Education is improvising — again

The OECD figures confirm what educators already know: AI is now a permanent feature of learning. Yet Australia’s education response remains largely reactive.

Teachers are expected to manage AI use without adequate training. Universities oscillate between embracing AI and policing it. Assessment systems designed for a pre-AI world are being stretched beyond their limits. This mirrors the early days of the internet, when institutions were left to improvise while policy lagged years behind reality.

The risk is not that students use AI. The risk is that we fail to teach them how to use it critically, ethically and responsibly.

Strategy is about capability, not control

Australia does not lack discussion papers or advisory bodies on AI. What it lacks is execution.

A credible national approach would include sector-based AI programs, public-interest digital infrastructure, reskilling pathways that include older workers and clear data-sovereignty rules for public-sector AI use.

As with energy systems and digital networks, strategy is not about resisting technology. It is about retaining agency.

The OECD data should be read as a warning. The window for shaping how AI is embedded in Australian society is narrowing. Without institutional leadership, Australia risks becoming a nation of enthusiastic users — and strategic bystanders.

Paul Budde is an IA columnist and managing director of independent telecommunications research and consultancy, Paul Budde Consulting. You can follow Paul on Twitter @PaulBudde.


