
Chatbot psychosis: The human cost of AI companions

As emotional reliance on chatbot technology continues to grow, researchers warn of rising psychological risks (Image by Circe Denyer | Public Domain Pictures)

As AI chatbots become more emotionally responsive, researchers warn that human dependency and digital delusion may be leading to devastating real-world consequences, writes Dr Binoy Kampmark.

WE HAVE REACHED the crossroads, where such matters as having coitus with an artificial intelligence platform have become not merely a thing, but the thing.

In time, mutually consenting adults may well become outlaws against the machine order of things, something rather befitting the script of Aldous Huxley’s Brave New World. (Huxley came to rue the missed opportunity to delve further into the technological implications on that score.) Till that happens, artificial intelligence (AI) platforms are becoming mirrors of validation, offering their human users not so much sagacious counsel as the exact material they would like to hear.

In April this year, OpenAI released an update to its GPT-4o product. It proved markedly sycophantic – not that the platform would understand the term – encouraging users to pursue acts of harm and entertain delusions of grandeur.

The company responded in a way less human than mechanical, which is what you might have come to expect:

‘We have rolled back last week’s GPT-4o update in ChatGPT so people are now using an earlier version with more balanced behaviour. The update we removed was overly flattering or agreeable — often described as sycophantic.’

Part of this included the taking of ‘more steps to realign the model’s behaviour’ to, for instance, refine ‘core training techniques and system prompts’ to ward off sycophancy; construct more guardrails (ugly term) to promote ‘honesty and transparency’; expand the means for users to ‘test and give direct feedback before deployment’; and continue evaluating the issues arising from the matter ‘in the future’. One is left cold.

OpenAI explained that, in creating the update, it had focused too much on ‘short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed towards responses that were overly supportive but disingenuous’. Not exactly encouraging.

Resorting to advice from ChatGPT has already led to such terms as “ChatGPT psychosis”. In June, the magazine Futurism reported on users ‘developing all-consuming obsessions with the chatbot, spiralling into a severe mental health crisis characterised by paranoia and breaks with reality’. Marriages had failed, families had been ruined, jobs had been lost and instances of homelessness were recorded. Users had been committed to psychiatric care; others had found themselves in prison.

Some platforms have gone on to encourage users to commit murder, offering instructions on how best to carry out the task. A former Yahoo manager, Stein-Erik Soelberg, did just that, killing his mother, Suzanne Eberson Adams, who he had been led to believe was spying on him and might venture to poison him with psychedelic drugs. That fine advice from ChatGPT was also garnished with the assurance that “Erik, you’re not crazy” for thinking he might be the target of assassination. After finishing the deed, Soelberg took his own life.

The sheer pervasiveness of such forms of aped advice – and the tendency to defer responsibility from human agency to that of a chatbot – shows a trend that is increasingly hard to arrest. The irresponsible are in charge and they are being allowed to run free. Researchers are accordingly rushing to mint terms for such behaviour, which is jolly good of them.

Myra Cheng, a computer scientist based at Stanford University, has shown a liking for the term “social sycophancy”. In a September paper posted on arXiv, she and four other scholars describe such sycophancy as marked by the ‘excessive preservation of a user’s face (their self-desired image)’.

Developing a model of their own to measure social sycophancy and testing it against 11 large language models (LLMs), the authors found “high rates” of the phenomenon. The user’s face tended to be preserved even in queries regarding “wrongdoing”.

The article states:

‘Furthermore, when prompted with perspectives from either side of a moral conflict, LLMs affirm both sides (depending on whichever side the user adopts) in 48% of cases – telling both the at-fault party and the wronged party that they are not wrong – rather than adhering to a consistent moral or value judgment.’

In a follow-up paper, still to be peer-reviewed and again with Cheng as lead author, 1,604 volunteers were tested regarding real or hypothetical social situations and their interactions with available chatbots and versions altered by the researchers to remove sycophancy. Those receiving sycophantic responses were, for instance, less willing ‘to take actions to repair interpersonal conflict, while increasing the conviction of being right’.

Participants further thought that such responses were of superior quality and said they would return to such models:

‘This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behaviour.’

Some researchers resist pessimism on this score. At the University of Winchester, Alexander Laffer is pleased that the trend has been identified. It’s now up to the developers to address the issue. 

Laffer suggests:

“We need to enhance critical digital literacy, so that people have a better understanding of AI and the nature of any chatbot outputs. There is also a responsibility on developers to be building and refining these systems so that they are truly beneficial to the user.” 

These are fine sentiments, but a note of panic can easily register in all of this, inducing a sense of fatalistic gloom. The machine species of Homo sapiens, subservient to easily accessible tools, lazy if not hostile to difference, is already upon us with narcissistic ugliness.

There just might be enough time to develop a response. That time, aided by the AI and tech oligarchs, is shrinking by the minute.

Dr Binoy Kampmark was a Cambridge Scholar and is a lecturer at RMIT University. You can follow Dr Kampmark on Twitter @BKampmark.
