As government and industry sprint toward an AI-powered future, it's the unanswered questions, not the glowing promises, that should give us pause, writes Ross Monaghan.
HAVING GIVEN the world a taste of the possibilities of artificial intelligence, the AI sector is now in full-throttle sales mode with government and industry, using sales techniques as old as time: hurry, buy now or you’ll miss out.
“Responsible” AI will solve all our pressing problems, according to the AI in Australia Economic Blueprint, published this week and produced by a consulting firm on behalf of OpenAI.
It clearly aims to encourage government and regulators to rush AI adoption.
Australia could harness AI to significantly boost economic prosperity, enhance public services and position itself as a leader, according to the blueprint.
The report says:
‘The economic winners of the AI era will be those who make deliberate choices today.’
What’s significant, however, is what the report doesn’t say.
While it paints a glowing picture of AI adoption, OpenAI’s CEO, Sam Altman, highlighted the problems in OpenAI’s podcast this week.
“People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don’t trust that much,” Altman said about ChatGPT, in what amounts to the fine print of the report.
When future problems are inevitably found with the accuracy of AI services, the industry will be able to point to early public proclamations of problems with the technology and say, “We told you so.”
But even Altman seems to be hallucinating when talking about AI. Despite his claim that people trust AI, evidence suggests otherwise.
A 2024 research report in the UK by The Alan Turing Institute found that people under 35 are the most trusting of AI, yet even in that group, 59% do not trust it. More worrying for governments is 2023 research by the Department of the Prime Minister and Cabinet, which found that more Australians distrust the Government's ability to use AI responsibly to provide personalised services (34%) than trust it (29%).
Regardless of trust, while there are billions of dollars to be made, the promise of AI will be pushed frantically by a sector that dominates the ranks of the world’s biggest organisations. The top six of those organisations produce the silicon chips, own the data centres and infrastructure, or sell the business and consumer products that rely on AI.
For those not in the sector, rushing to commit to AI adoption can be an irresistible lure. The “tech bros” are targeting government services and sectors that require significant human input. For leaders wanting to quickly reduce costs, replacing people with technology that works 24/7, 365 days a year, looks feasible. Long term, however, the ramifications will be profound.
Data security, privacy, equity, stakeholder engagement and other issues will inevitably emerge, and without a human workforce to deal with these challenges, who will solve them?
For businesses with significant brand and reputation risks, relying on the same AI products and services as their competitors will lead to bland strategies, communication and products.
In classrooms, students could use AI services to supercharge their learning, but most use them to quickly and effortlessly produce assessments while learning little. Outside the classroom, many of their friends are AI chatbots, so their ability to understand and empathise with others, let alone engage with them, is limited.
University students graduating with little more than the skill of prompting AI, and experienced professionals in the workplace being replaced by workers wielding AI, will lead to a worldwide skills shortage and growing inequity in society.
AI will certainly play a substantial role in government, business and society in the future, but racing towards adoption without thorough analysis and policy is dangerous.
Too much is at stake. We have already seen the damage done to society by the quick, unregulated adoption of social media, a lesson the Australian Government is learning now with its world-leading ban on teenagers using social media.
Putting a genie back in a bottle is impossible.
Ross Monaghan is a former journalist, professional communication manager and CEO.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Australia License