Artificial Intelligence has uncanny parallels with a gothic novel written more than two centuries ago, which captured a fundamental truth: creation and responsibility are inseparable, writes Paul Budde.
IN 1818, a 20-year-old Mary Shelley published Frankenstein, a novel she had begun at 18 and one that reads today less as gothic fiction and more as a warning about technological ambition.
At its centre is Victor Frankenstein, a scientist who succeeds in creating life — but fails to take responsibility for it. His downfall is not the act of invention, but his refusal to deal with its consequences.
That failure proves catastrophic.
Shelley was writing in the long shadow of the Enlightenment, an era built on faith in reason and progress. That faith still shapes how we think about technology today. We are told that technology is neutral — merely a tool — and that its effects depend on how it is used.
But this is increasingly untenable.
Technologies are shaped by the systems that create them — commercial incentives, political interests and social dynamics. They are not neutral once deployed at scale. They actively shape behaviour, institutions and even truth itself.
Artificial intelligence is a case in point
AI systems are being developed at an extraordinary speed, introducing capabilities that are not fully understood. Yet the structures driving this development are dominated by competition: speed, scale and market control. Ethics is acknowledged, but rarely decisive.
The result is a growing responsibility gap.
No single actor fully owns the consequences. Governments lag behind, regulation is reactive, and corporations operate under pressure not to fall behind competitors. Responsibility is dispersed — and therefore weakened.
We have seen this before
The rise of social media platforms was initially celebrated as a force for connection and democratisation. Instead, it has contributed to misinformation, polarisation and the erosion of democratic norms. These outcomes were not entirely unforeseen — but they were not seriously addressed when it mattered.
Even now, meaningful reform struggles against business models built on engagement and growth.
This is not just a failure of foresight. It is a failure of responsibility.
Shelley’s insight remains strikingly relevant. Frankenstein’s mistake was not innovation, but abandonment. He created something powerful and then stepped back, leaving society to deal with the consequences.
Today, that pattern is institutionalised. Innovation moves fast; accountability follows slowly, if at all.
If technology is not neutral, then responsibility cannot be optional.
This requires a shift in thinking. Ethical considerations must be embedded from the start, not added later. Regulation must anticipate, not react. And we need to question the assumption that faster innovation is always better.
More than two centuries ago, Shelley captured a fundamental truth: creation and responsibility are inseparable.
In the age of AI, that truth has become urgent
We are no longer dealing with isolated inventions, but with systems that shape societies. If we continue to innovate without accountability, we risk repeating Frankenstein’s mistake — this time not in fiction, but at scale.
The question is no longer whether we can build these technologies.
It is whether we are prepared to take responsibility for what follows.
Paul Budde is an IA columnist and managing director of the independent telecommunications research and consultancy Paul Budde Consulting. You can follow Paul on Twitter @PaulBudde.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Australia License