Technology Analysis

Keeping AI out of evil hands for the benefit of mankind

While Skynet is a thing of fiction, AI still needs regulating to prevent it from being used by the wrong people (Screenshot via YouTube)

With AI increasingly becoming a dangerous tool in the wrong hands, it's up to us to regulate its use in order to stay in control, writes Paul Budde.

I USE ChatGPT almost daily for my research, looking for background information that is not available through Wikipedia, Google or other research tools. The service only provides information that was available before September 2021, so it is not at all useful for more recent developments. In my case, I am mostly looking for obscure information, and many of my searches are simply for my own interest.

In about 75 per cent of the searches, I do find relevant information. In half of those cases, the information is good enough for me to use. Of the other half, I can get the right information by questioning ChatGPT in more detail — which I find a great feature of the service.

The other 25 per cent of my searches result in a message:

‘I apologise, but I do not have any information on [the topic]. I am an AI language model and my knowledge is limited to what has been publicly available up to September 2021.’

Or it makes up its own stories, often extremely convincingly. Bear in mind that there is no intelligence whatsoever in AI. All the intelligence that ChatGPT appears to have comes from information produced by us humans, and the system only uses information available online to come up with its answers.

It is important to recognise the current limitations of the technology, which include:

  • creating plausible sounding but incorrect responses, known as “hallucinations”;
  • a tendency to respond with verbose, long-winded answers;
  • a tendency to make assumptions when faced with ambiguous questions;
  • some time limitations on its knowledge.

We humans are still the best judges of what is right and what is wrong, and that remains our best defence against misuse of the system by those who want to use it for the wrong purposes.

If you want to do more in-depth research of the kind I am doing, you need a solid grounding in the subject yourself to eventually arrive at the right information and to know what is correct and what is not.

For now at least, ChatGPT requires significant human insight and oversight to ensure its limitations are addressed and its benefits are maximised.

However, as far as I'm concerned, it is a very good start for a useful AI service because it gives me access to information that I can't find in any other way without travelling to archives and libraries.

If in doubt, I keep asking more questions, providing more detail where possible, as that can lead to improved results. But you need to know the topics you research in some detail to judge whether ChatGPT is making things up by connecting the wrong data. Be aware that ChatGPT is very good at making you believe the information it gives is correct, so you must be careful. Somebody who does have the right knowledge can immediately see when it is wrong, and that could be very embarrassing.

Of course, the system learns and gets better. But that also gives us, as a society, time to learn how to deal with AI and what kind of regulation we need to stay in control. As usual, it is the European Union that is leading the charge here, through its Artificial Intelligence Act.

With all new developments, regulation is always lagging. And as with anything, you can use it for good and bad. Unfortunately, there will be plenty of people only too willing to use AI for all the wrong reasons and no matter what we do, it will always be a cat-and-mouse game where the good guys need to stay ahead of the baddies to keep things under control.

Let's hope that this will be the case, because I don't see any other way to stay in control within our democratic/capitalist system.

China has it a bit easier in that respect; they can control things much better, but the question is how far we want to go. I, for one, don’t like a totalitarian system.

So it is of little use to try to ban new technologies or to demonise them. It is up to us to ensure that we guide these developments in the right direction. In our democratic system it remains a matter of muddling through, but it is important to stay involved in that muddling process.

Pandora's box has been opened and we won't get AI back in it. Nor would we want to. There is so much that is positive in AI, especially for a world that is only becoming more and more complex. So we need to focus on how we keep things under control.

Humans are toolmakers and we have been making tools for hundreds of thousands of years, so that is not going to change. Look at our dependence on mobile phones, the internet and other technologies. Look at medical technologies, from pills and vaccines to glasses, hearing aids and bionic body parts. We entered the human-machine phase a long time ago and if we project that 50, 100 or 200 years ahead, that is the future developing right under our eyes.

If we fail to control our future, we will probably have a crisis that will do that for us and the clock will be turned back. I remain positive and I believe in a human-machine future where humanity remains in control.

Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.
