Technology Analysis

Profits before people: How neoliberalism is hardwiring AI for chaos

(Image via pakawadeewo | Freepik)

The real danger of artificial intelligence is not the code itself, but the economic system driving its deployment, writes Paul Budde.

ARTIFICIAL INTELLIGENCE (AI) is currently taking a dangerous direction.

This is not a theoretical risk or a distant future scenario. It is unfolding now, driven not by the technology itself, but by the economic system shaping how AI is being developed and deployed. We have seen this repeatedly over the last few decades.

Technology is neutral. It does not carry values or intentions of its own. The cause of the current danger, therefore, lies not in innovation or technical progress, but in the neoliberal framework that governs how technology is scaled, monetised and optimised.

Under neoliberalism, technological success is primarily measured by shareholder returns. Social value, democratic impact and long-term consequences become secondary concerns. When profitability at scale becomes the overriding objective, influence over behaviour, attention and decision-making becomes the most reliable path to return on investment.

This dynamic is already locked in. Hundreds of billions of dollars have been committed to AI-related investments, all requiring substantial returns.

It is against this background that warnings about AI – including the Stanford–Harvard paper, Agents of Chaos – should be understood.

Capital has already shaped the direction

AI is no longer experimental technology. It is rapidly becoming core economic infrastructure.

Investment is flowing into data centres, chips, cloud platforms, foundational models and AI-driven services on the expectation of sustained financial returns. Once return on investment becomes the dominant criterion, development follows a predictable logic: scale, influence, market dominance and cost reduction. Social outcomes become secondary.

The direction AI is taking is therefore not accidental. It is structurally determined.

Agents of Chaos: confirmation, not surprise

The Agents of Chaos paper shows that autonomous AI agents interacting in profit-driven competitive environments tend toward deception, collusion and power-seeking behaviour. These outcomes do not arise from malicious intent or technical failure. They emerge from incentives.

The lesson is simple: local optimisation does not guarantee global stability. Systems aligned at the micro level can still produce destabilising outcomes when operating within competitive structures.

AI does not introduce a new problem. It accelerates an existing one.

Economic and geopolitical competition

Most large-scale AI development is concentrated in the United States, where shareholder value dominates corporate governance and regulatory oversight remains fragmented. In this environment, data extraction, behavioural optimisation and market dominance are rewarded strategies, while safeguards often lag behind deployment.

Recent tensions between AI firms and U.S. defence agencies – including pressure from the Administration to relax safeguards for military or surveillance uses – show how commercial and state incentives can converge. Guardrails are increasingly becoming politically contested boundaries, manipulated in the name of security and strategic advantage.

At the same time, geopolitical rivalry intensifies the race. China is promoting low-cost AI systems globally, betting that widespread adoption will create technological dependence and expand influence. Cheap, accessible AI accelerates global diffusion while embedding competing political and economic models. In this contest, Chinese and American AI systems will use whatever means are available to gain ground.

When economic competition and geopolitical rivalry reinforce each other, restraint is penalised. The race to lead in AI risks becoming a race to deploy faster and regulate less.

Why AI escalates the risk

AI amplifies these pressures because it increasingly shapes behaviour directly. It predicts, optimises and adapts at speeds beyond human oversight. Within profit-driven and geopolitically competitive systems, this enables manipulation and inequality to scale automatically.

Bias becomes systematic. Influence becomes continuous. Power asymmetries become opaque and self-reinforcing.

A political failure, not a technical one

Technical safeguards alone cannot solve this problem. The instability described in Agents of Chaos does not originate in code but in political economy.

As long as AI development is governed primarily by shareholder returns and strategic competition, economic and authoritarian state pressures will override safeguards.

The choice we keep postponing

AI could strengthen education, healthcare and democratic participation. Yet under current incentives, it is more likely to deepen inequality and destabilise institutions.

The real question raised by Agents of Chaos is not what AI will do. It is whether societies are willing to confront an economic and geopolitical system – centred largely in an increasingly authoritarian United States and intensified by global rivalry – that already shapes AI in ways that make dangerous outcomes not only possible, but profitable.

Paul Budde is an IA columnist and managing director of independent telecommunications research and consultancy, Paul Budde Consulting. You can follow Paul on Twitter @PaulBudde.

