Extreme caution must be taken when humans experiment with transhuman and posthuman technology, writes Paul Budde.
FOR THOSE INVOLVED in technology from a government and industry perspective, as well as from a user’s point of view, we all share a responsibility to monitor developments in this space and ensure they're used for the benefit of society.
If we brought a person from the Stone Age, an early farmer from Mesopotamia, a Greek philosopher, a Renaissance merchant from Florence and one of us together, blindfolded, and we started chatting around a campfire, we would very quickly find out we have a lot in common.
After a few pints, we would sing together and soon end up in some jovial embracing. At this level, very little has changed in the evolution of humans.
What has changed is the environment we live in and the tools we've developed. Digging a bit deeper, it is amazing to see that, with the assistance of technology, the quality of human life has enormously improved. Even more mind-boggling is the fact that most of that happened in the last 50 years.
Our consciousness is what makes us human. With ongoing and ever faster technological developments, we are less and less dependent on our bodies: organs can be transplanted and other tools can enhance our biological and cognitive functions.
If these developments continue, why would we need a body? Aristotle asked that same question about 2,500 years ago.
Coming back to the meeting around the campfire, even with that varied group chatting, we still would not be able to find answers to the big questions of life.
We wouldn't settle the meaning of life, the nature of free will, what truth is and so on. In contemporary times, we can add issues of democracy, fake news, conspiracy theories, social media echo chambers, populism and totalitarianism to the list. It looks like the human mind is ill-prepared to tackle them.
What is needed for us to improve on our current situation?
If history is a good measure, it is doubtful that humans in another 10,000 years' time will be much different from us. Yet our cognitive limitations are already a problem for the big crises facing us today, let alone those of the future.
It is not technology that stops us from addressing these major issues, but the cognitive limitations of humans in dealing with them.
But we clearly are at the doorstep of an inflection point, as new technology develops to change what it means to be human.
The tools on the rise today seem set to enhance our cognitive capacities. Over centuries and millennia, our tool-making capacity will keep creating bigger and better environments, and it will become harder to argue that humans themselves will remain the same.
The tools we are creating and other developments that are around the corner indicate a logical and rational direction towards transhumanism.
So far, we have been able to stay in control of the technology we have developed. However, self-learning algorithms and developments in machine learning, DNA engineering, biotechnology, neurotechnology and quantum theories relating to our consciousness are all opening Pandora's box.
Can we still stay in control? It appears that, as a global society, we lack the cognitive quality needed to manage these processes in the long term.
If we are to lift our cognitive qualities, we need to do so together, in a collaborative way. The alternative could be catastrophic.
Do we first need a crisis to build global consensus? Will that be too late? Will our innate warring instincts lead to selected groups of transhumans?
As both Stephen Hawking and Ray Kurzweil have argued, we need to face these challenges, otherwise we will be outcompeted by whatever transhumans or posthumans arrive on the scene.
Professor Stuart Russell lists three principles to guide the development of beneficial machines. He emphasises that these principles are not meant to be explicitly coded into the machines; rather, they are intended for the human developers.
The principles are as follows:
- The machine's only objective is to maximise the realisation of human preferences;
- The machine is initially uncertain about what those preferences are; and
- The ultimate source of information about human preferences is human behaviour.
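Russell's three principles can be loosely illustrated with a toy sketch. The scenario, figures and function names below are purely illustrative assumptions, not Russell's formal model: a machine starts out uncertain about which of two options a human prefers, updates its belief from the human's observed choices, and then acts only to realise the inferred preference.

```python
def update_belief(prior_a, observed_choice):
    """Bayesian update of P(human prefers A) after one observed choice.
    Illustrative assumption: the human picks their preferred option 80%
    of the time, so choices are informative but noisy."""
    likelihood_if_a = 0.8 if observed_choice == "A" else 0.2
    likelihood_if_b = 0.2 if observed_choice == "A" else 0.8
    evidence = likelihood_if_a * prior_a + likelihood_if_b * (1 - prior_a)
    return likelihood_if_a * prior_a / evidence

# Principle 2: the machine is initially uncertain about the preference.
belief = 0.5

# Principle 3: human behaviour is the source of information.
for choice in ["A", "A", "B", "A"]:
    belief = update_belief(belief, choice)

# Principle 1: the machine's only objective is to realise the
# (inferred) human preference.
action = "A" if belief > 0.5 else "B"
print(round(belief, 3), action)
```

Even in this toy form, the design choice matters: because the machine's belief never collapses to certainty, contradictory behaviour (the lone "B" choice above) pulls the belief back down rather than being ignored, which is the spirit of keeping humans in the loop.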
We should not use artificial intelligence and other technologies to solve our complex problems for us; instead, we should concentrate on developing technologies that better equip us to solve these problems faster and more effectively.
Last week, the Australian Government announced that it wants to become a global leader in developing and adopting responsible artificial intelligence (AI). For this, $124.1 million has been set aside, which includes the establishment of a National Artificial Intelligence Centre within the CSIRO, four AI and digital capability centres, and a next-generation AI graduates program.
Let us hope that humans, while still in charge, will adhere to those principles set out above, to further develop already unstoppable transhuman and potentially posthuman technologies.
Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.