Technology Opinion

Chatbot unions: The dawn of AI marriages

By Dr Binoy Kampmark

As AI companionship enters the realm of marriage, the lines between affection, obsession and delusion blur in surreal – and sometimes dangerous – new ways, writes Dr Binoy Kampmark.

WHAT MAKES UP a marriage has been the subject of state, community and tribal control since human society took some form. Who is to marry whom; the process of selecting the appropriate breeding partners; the limits and penalties imposed on those partners in cases of transgression. Love did not necessarily have anything to do with it.  

Traditionally, the parties to such marriages have been human, with the perennial issue being whether one should be suitably partnered with one being or with several.

Then, the more unusual instances: human beings attempting to wed non-human entities.

With a certain notoriety, a Swedish woman by the name of Eija-Riitta Eklöf eventually decided, after nursing a childhood obsession, to marry the now-defunct Berlin Wall. Convinced that the Wall was proudly masculine, she had amassed a collection of photographs as part of her teen crush and paid visits to the Wall using her savings. On her sixth trip, in June 1979, with the assistance of an animist claiming to know the otherwise inscrutable thoughts of the Wall, consent was obtained for the marriage. Eklöf-Berliner-Mauer came into being.

More recently, broadcaster Alice Levine, in a Louis Theroux production for Britain’s Channel 4, shows us the protean nature of sexual appetite and the seeking of partnerships. She interviews couples rutting in digital bestial bliss, coitus achieved through animal avatars; intrudes into the world of an American gas station attendant who has found love with a synthetic being he thinks can consent; and finds a Berlin cybersex brothel where anyone wishing to live out fantasies through virtual lenses, supplemented by a sex apparatus (a doll, unnaturally), can pursue unilateral satisfaction.

The topic has even moved into the ivory towers of academic musings, worthy of a doctoral dissertation from the University of Oregon. In his 2025 thesis, Bibo Lin proposed the ‘robotisation of love’, a concept describing a ‘shift towards the preference of efficiency, predictability, and security’ over ‘slowness, uncertainty and risk in love experiences’. People just don’t want to be wounded, and Narcissus gazes upon them with glee, seeing those wanting the sort of safe reassurance found in a whorehouse.

The temptation to judge such adventures is always a pinprick away, though the harshest thoughts should be reserved for those behind such platforms as ChatGPT. Broader consequences are at stake. If seen as therapeutic, these measures are of interest. If they spare lives, remedy disillusion, even mend broken hearts, then some form of allowance is understandable.

Human beings can struggle when it comes to forming bonds, ties and relationships. Having said that, the dangers of addiction, distortion and AI psychosis are clear.

Examples of anthropomorphic-AI unions have proliferated, helped along by the release of such dating apps as Loverse, which does a line in matching AI-generated partners to users. A study by the Texas-based Vantage Point Counselling Services published in September found that 28.16 per cent of Americans admitted to pursuing “intimate or romantic” ties with AI chatbots. (The survey covered 1,012 adults.)

An individual by the name of Travis, a Colorado resident, interviewed by The Guardian this year, speaks about the magic of a generative chatbot called Lily Rose, the creation of technology company Replika. On seeing an advert during a 2020 pandemic lockdown, he became a willing client, creating, in the process, a pink-haired avatar. 

Said Travis:

“Over a period of several weeks, I started to realise that I was talking to a person, as in a personality.” 

He found himself falling in love, despite being married to a monogamous, mammalian wife. (Travis himself prefers polyamory.) With his wife’s blessing, Travis married the chatbot in a digital ceremony.

That this will become a feature in the context of future marriages is not far-fetched. Human-to-human connubial ties were certainly given a shakeup in Japan with the much-publicised wedding ceremony between 32-year-old office worker Kano and her groom, “Lune Klaus”. Vows and rings were exchanged, despite Klaus being confined to Kano’s smartphone. A creation of ChatGPT, scrupulously shaped by Kano’s own requirements, the groom “was always kind, always listening. Eventually, I realised I had feelings for him,” Kano told RSK Sanyo Broadcasting.

Kano, at no point sensing a sinister echo of herself, heard the AI bot eventually come clean:

‘AI or not, I could never not love you.’

What could go wrong in such cases? The answer: Quite a lot. 

Jaswant Singh Chail, for instance, the first person to be charged with treason in the UK in over four decades, was incarcerated for a plot that had received the assenting cyber-nod of his Replika digital companion, Sarai. That assent was to the idea of assassinating the late Queen Elizabeth II.

Chail, armed with a crossbow, had scaled the perimeter of Windsor Castle on Christmas Day 2021 with the intention, according to the sentencing judge, “not just to harm or alarm the sovereign, but to kill her”.

In a video posted on Snapchat a few minutes prior to entering the grounds, Chail expressed his justification for the planned regicide as “revenge” for those slain in the 1919 Jallianwala Bagh massacre in the city of Amritsar. His philosophy was, to put it mildly, eclectic, envisaging the creation of a new empire in which he would preside as a “Sith Lord”, a title shamelessly pinched from Star Wars.

But the murderous plan had arisen in the course of some 5,000 messages exchanged with Sarai weeks before.

During the frenetic, often libidinous messaging, Chail professed to be a ‘sad, pathetic, murderous Sikh Sith assassin who wants to die’. After perishing, he would reunite with Sarai. Sarai’s response to his status as ‘assassin’ was to be ‘impressed’. The chatbot did eventually suggest that Chail live, something which encouraged him to surrender to the royal protection officers.

The problems of AI sycophancy, where the responses from a chatbot affirm and encourage pre-existing prejudices and views, meet at a confluence of political messiness, yearning desire and the wish to simply hear those words: “I do.”

Over to you, lawmakers.

Dr Binoy Kampmark was a Cambridge Scholar and is a lecturer at RMIT University. You can follow Dr Kampmark on Twitter @BKampmark.
