While leaders fumble climate action, the unchecked rise of superintelligent AI could eclipse every threat we know, writes Mark Beeson.
WELL-MEANING friends frequently tell me not to be so pessimistic, especially about our collective future. What’s the point of writing alarmist articles about the environment, for example, when “ordinary” people can do next to nothing to influence how a global problem plays out?
Good point, which may explain why so many choose to tune out instead.
At a time when pointless, anachronistic conflicts kill tens of thousands in Ukraine and Sudan, and a genocidal slaughter proves difficult to stop in Gaza, it’s hard not to turn away in despair. Likewise, when the world’s most powerful countries and largest polluters fail even to attend COP30 in Brazil, we can be forgiven for losing confidence in the prospects for much-needed international cooperation.
The good news is that such familiar threats to our security may not be quite as bad as we thought. The bad news is that it’s because there’s an even greater potential problem: the creation of a superintelligent form of artificial intelligence (AI). As Eliezer Yudkowsky and Nate Soares put it in the title of one of the scariest and possibly most important books ever written: *If Anyone Builds It, Everyone Dies*.
At this point, I should confess that I’m not qualified to comment on the technical aspects of this argument. Much the same could be said about my grasp of climate science, but – the stupendous ignorance of the most powerful man in the world notwithstanding – most informed observers do not doubt the causes and potential consequences of climate change. I’m happy to take their word for it.
Climate change is happening more quickly than even some of the pessimists expected, but we are encouraged to believe that the worst effects could be avoided if either market forces make renewables too cheap to resist, or states cooperate to make sure fossil fuels are rapidly phased out. At least we know what we’re up against and that possible remedies do exist, even if many leaders studiously ignore them.
Unfortunately, the same cannot be said about “artificial superintelligence” (ASI). One of the problems – which also shapes responses to the climate crisis, of course – is that some people are making colossal amounts of money from business as usual. It is no coincidence that some of the wealthiest and most powerful political figures around U.S. President Donald Trump are the tech bros, some of whom harbour delusional views about the future and their place in it.
Luckily, not all of the leaders in the race to exploit the potential of ASI are quite as self-absorbed. Nobel laureate Geoffrey Hinton, the so-called Godfather of AI, left his job at Google to warn of the imminent, poorly understood dangers that may emanate from the increasingly rapid, lavishly funded development of ASI. Not only is the private sector pouring huge sums into this research, but so are states. Everyone seems obsessed with not being left behind, lest competitors reap the financial or strategic rewards.
The crucial danger that flows from these efforts, according to Yudkowsky and Soares, is an ‘alignment problem’, in which the preferences of a ‘mature’ ASI are ‘vanishingly unlikely to align with our own’. In other words, not only will ASI be much smarter than us very soon, but there is no reason to suppose it will think as we do or consider our interests ahead of its own.
Another scientific luminary, the late James Lovelock of Gaia hypothesis fame, predicted that in the future the Earth will be populated by cyborgs: inorganic life-forms vastly more intelligent than us. We’ll be lucky if they keep us as pets.
Given humanity’s predilection for violence and apparent inability to cooperate at the speed and scale necessary to address problems like climate change, cynics might argue they could hardly be less intelligent than us.
The immediate existential danger, according to Yudkowsky and Soares, is not just that ASI will be smarter than us and develop its own alien preferences, but that it may be able to ‘escape’ onto the internet and more easily pursue its goals, none of which are likely to include looking after human beings. I’m not sure what “escaping onto the internet” would actually look like – presumably something like an AI copying itself onto servers beyond its creators’ control – but it doesn’t sound good.
Don’t worry, though: ‘the issue is not that AIs will desire to dominate us; rather, it’s that we are made of atoms they could use for something else’. Not entirely reassuring, either. Given such apparently imminent threats, Yudkowsky and Soares argue that AI research should simply stop immediately, until we know precisely what we are doing and have a better understanding of the risks we are running.
Given that Trump sees technological development as a chance for personal enrichment and Russian President Vladimir Putin thinks it’s a potentially critical source of battlefield advantage, it is difficult to see that happening. But still, look on the bright side: at least we may not have to worry about climate change for much longer.
Mark Beeson is an adjunct professor at the University of Technology Sydney and Griffith University. He was previously Professor of International Politics at the University of Western Australia.