AI can write and predict, but it also hallucinates and lies — the danger isn’t AI, but rather thinking we understand it. Patrick Drennan reports.
THIS WILL BE rage bait for some (the Oxford word of the year) and will invoke a parasocial reaction from others (the Cambridge word of the year), but at least it is not vibe coding, the bizarre word of the year from Collins Dictionary, which is simply a tech term for using AI to write software.
Technically, AI is not a word nor an acronym; it is an initialism.
By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.
~ Eliezer Yudkowsky
Artificial Intelligence (AI), or A1 as the American Secretary of Education called it, is precisely described by IBM as:
“...technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.”
Artificial intelligence can also write documents and stories, compose music, paint pictures and create images of people and events that are not real.
ChatGPT is an AI chatbot that uses natural language processing to create humanlike conversational dialogue.
AI is often a personal encyclopaedia on your phone and laptop. It allows phones to learn from user patterns and personalise the experience. It makes your writing easier and more grammatically correct.
Most banks and online services use AI for fraud detection by analysing transaction patterns to flag suspicious activity in real-time.
Predictive AI has rapidly improved life-critical medical services like identifying problematic lesions and heart arrhythmia. Because of this technology, seismologists can predict earthquakes and meteorologists can predict flooding more reliably than ever before.
~ Dr Margaret Mitchell
AI-programmed drone warfare is becoming commonplace in the Ukraine War, and some of these drones operate without any human operator. The use of AI in drone warfare raises ethical questions about delegating life-or-death decisions to machines and about unintended consequences, such as drones being programmed to attack non-combatants.
Everyday jobs that may be lost to AI include customer service representatives, receptionists, insurance underwriters, accountants, computer system analysts, coders and even baristas.
However, many employers who rushed onto the AI bandwagon are realising that they still need human input. For example, doctors who use AI for diagnostic support must still form the final medical opinion, and engineers who use AI to prototype faster still provide the high-level architectural design and business logic that AI cannot.
Evidence shows that some companies that laid off staff in 2025, anticipating that AI would take over their tasks, are now rehiring them.
There’s a healthy amount of fear that AI is taking over, but there are other areas across all industries that still rely on creativity, strategic thinking, problem-solving and just being human.
For example, AI editors can check books and articles for plagiarism, spelling and grammar, but only human editors can assess pace and context, and exercise judgement.
Big Little AI lies.
Just because an AI system is deemed safe in the test environment doesn’t mean it’s safe in the wild. It could just be pretending to be safe in the test.
~ Dr Peter S Park
AI is often only as reliable as the human who programmed it.
Here are some eclectic examples:
A study from Deakin University found that ChatGPT fabricated about one in five of its academic citations, while half of its citations contained other errors characteristic of generative AI hallucination.
The U.S. PIRG Education Fund issued a report examining advanced tech toys for young children that, using ChatGPT, not only held and recorded conversations with children but also discussed adult subjects with them, including kinks, kissing and religion.
Ever see those AI-generated recreations of ancient cities like Pompeii and Rome? While they are very useful models, they are not historically accurate. In one instance, relating to Ancient Rome, historians noted that the buildings, baths, chariots and even the legionaries' uniforms were wrong.
In another example, famous rock saxophonist Bobby Keys jokingly claimed to have played on Elvis Presley's hit song Return to Sender. AI now records that as fact, when the actual saxophonist was Nashville sideman Boots Randolph.
Google's AI Overviews tool recently told some users searching for how to make cheese stick to pizza that they could use non-toxic glue.
As telecommunications expert Paul Budde explains, AI will never react to the outside environment as a human does: it has neither our flexible, adaptable cognitive ability nor the human body's capacity to fight harmful organisms like bacteria, viruses and fungi.
Perhaps, Merriam-Webster got it right when their word of the year was slop:
“...digital content of low quality that is produced usually in quantity by means of AI.”
Patrick Drennan is a journalist based in New Zealand, with a degree in American history and economics.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Australia License