Rapid developments in AI are stirring excitement within the tech world, but also exposing the harmful ways it can be applied. Paul Budde reports.
ONE DAY LAST WEEK, I received several emails and links to stories that form the basis of this article. It started with an ABC News article I read titled, ‘Twitter is becoming a “ghost town” of bots as AI-generated spam content floods the internet’.
I am part of an international network of ICT experts and I forwarded the article to the group. This led to an avalanche of responses with further information on the massive social, political, economic and technical changes that are cascading as a result of the commercialisation of artificial intelligence (AI).
As I have mentioned before, the potential of AI is enormous. In our increasingly complex world, we need all the intelligence we can gather, and AI can be extremely helpful in finding better and faster solutions. But it will also be very disruptive, most likely far more so than previous developments such as the arrival of the internet, the mobile phone and social media.
A video titled ‘AI-Generated Videos Just Changed Forever’ discusses Sora, the new AI video service from OpenAI, and shows both the advantages and potential disadvantages of this new development. The impact on the film and video industry will be massive: why use studios if you can make your own video for free? The fallout can also be read in this article.
And why do you need the news media if you can generate your own AI news? That, at least potentially, could be one of the conclusions drawn by Facebook's parent company, Meta, as it abandons its Australian news content deals, a loss of potentially hundreds of millions of dollars to the industry. (Another reason is that news no longer rates highly on Facebook.)
Another worrying development is Russia's use of AI to interfere in the 2024 U.S. elections.
An older presentation (2019), titled ‘Hacking Ten Million Useful Idiots: Online Propaganda as a Socio-Technical Security Project’, discusses sociotechnical systems (STS) and emphasises the need to address hacking, misinformation and conspiracy theories in a multifaceted way.
From an organisational point of view, STS is an approach to complex work design that recognises the interaction between people and technology in workplaces. The term also refers to coherent systems of human relations, technical objects and cybernetic processes that together make up large, complex infrastructures. Society as a whole, and its constituent substructures, qualify as complex sociotechnical systems.
These negative uses of AI were forecast and, in the presentation, security experts discuss what is needed to protect our society from the misuse of emerging AI-based technologies. As the presentation shows, this is not an easy task, as creating misinformation and fake news is far easier than unravelling it. By the time the latter happens, the misinformation has already been replicated by bots and gullible people (the “ten million useful idiots”).
Already, without AI, we have seen how easily conspiracy theories spread and how readily people exploit them out of ideology, political bias or pure criminality. Add AI to the mix and it is clear where this development is heading.
Targeting gullible people has also become far easier thanks to big data combined with data science. Once targeted, these people spread the misinformation further, infiltrating other groups until suddenly it is taken up throughout society.
This was another response from one of my American colleagues:
I was recently asked to serve on the board of our local hospital and now get a front-row seat to watch healthcare issues unfold. One of the biggest financial issues for healthcare providers in the U.S. is the tendency for health insurance carriers to reject claims almost automatically, which triggers a paperwork avalanche between patient, provider and insurance company.
Most of the major insurance carriers are now using AI to reject claims. AI is far faster and better than humans at finding small issues in a claim that can cause it to be rejected. We are looking at new hospital management software that will use AI to help pre-qualify the claims before they are sent in, but it is going to be 9-12 months before we can get it fully implemented.
In the meantime, the paperwork avalanche continues to grow.
However, AI is also used to make fraudulent claims, so the above message prompted this reaction:
‘It’s a nightmare across the board. Financial healthcare institutions are trying to protect themselves from AI-generated fraud.’
AI has also infiltrated Google Search. A well-documented example is the rise of fake airline customer service listings. Posing as legitimate airline support, scammers try to get you to rebook flights; the rebookings are, of course, fake.
LinkedIn has been infiltrated by bot messages that push certain fake services to the top of search results. All of these developments are very worrying. It is important that fewer people fall into the category of “gullible”, but the reality is that most people will become victims in one way or another, as they will increasingly be unable to distinguish what is true from what is fake.
As truth is fundamental to the way our society works, undermining it will lead to social chaos.
Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.