Technology Analysis

As AI evolves, so must the way we humans think


As big data gets bigger and AI technologies advance rapidly, we need to adapt our own intellectual capacity to keep up with challenges presented, writes Paul Budde.

AS BIG DATA technologies become more prevalent and artificial intelligence (AI) capabilities continue to evolve, there is great potential for these two areas to intersect and create significant value, especially in relation to the complex society and environment that humanity has shaped around itself.

This complexity has now moved beyond human intellectual capacity. Our education systems, experts and leaders are no longer able to keep up. We know what the problems are (democracy, the environment, technology, inequality), but we are unable to come up with the right solutions. As a result, we also have to change our thought processes: we can no longer rely on ways of thinking developed during earlier stages of humanity.

To address this complexity, we need to create a reliable and balanced knowledge system on which we can base future decision-making. Technology can assist here: data-based knowledge systems, quantum computing to power the processing and AI to make actionable outcomes accessible.

However, with this intersection come both pros and cons that must be carefully considered. Science and technology can assist, but it is we humans who will have to take the lead in designing the best possible system around this.

AI offers the unique capability of providing our society with the combined knowledge of humanity. Over the last century, we have increasingly become more specialised. We have great detailed knowledge of often small areas. Large parts of this knowledge have over the past decades been digitised. AI offers the opportunity to unite that expertise.

One of the key features of big data technologies is their ability to decentralise power. With data becoming more freely available and accessible, and technology platforms democratising the application of data, individuals and communities will have greater access to relevant information. This shift in power, from environmental scientists, consultants, academics and government agencies towards individuals and communities, will have both positive and negative consequences.

On the positive side, this shift in power could lead to enhanced engagement by society in decision-making processes. With more information available, individuals and communities can be better informed, leading to more meaningful consultation processes and greater involvement in decision-making. Ultimately, consensus decision-making might arise, whereby communities decide for themselves on an issue rather than governments. This could lead to new collaborative approaches in decision-making involving a wider range of stakeholders.

However, the potential for “fake data” to infiltrate AI could undermine this power shift. Issues observed to date include fake and manipulated data in social media and fabricated images. Deepfakes, in particular, will make AI harder to trust as the technology reaches broader public application.

Power decentralisation is also likely to impact government regulators and, in turn, impact the political system. Regulators make the best decisions when they have access to real-time, reliable and accurate data. Conversely, the empowerment of communities could make the regulator’s job more difficult, particularly when overlaid with political pressures from communities that have access to high-quality data.

At an industry level, the disruption AI will bring could cause significant changes to supply chains and the businesses within them. It could also lead to rapid changes in the skills the industry requires, ranging from in-house reskilling, through external retraining and further education, to outright cuts in the workforce.

AI is fundamentally about intelligence, which the evolutionary biologist George John Romanes defined in 1882 as follows:

‘Intelligence, then, I take to be the faculty of adapting our mental representations of the environment to the requirements of the moment by means of flexible combinations of the simpler forms of behaviour which have been previously acquired.’

In other words, intelligence is the “capacity to do the right thing at the right time”.

Hence, AI is fundamentally about ethics. As AI is based on the transfer of knowledge, a key ethical consideration is the biases contained in that knowledge. Current AI is built mostly on historic data, and that data inherently carries the biases of the time it was collected. Because AI relies on training machine learning algorithms on such data, those biases can be absorbed and amplified.

Algorithm bias has already produced some high-profile failures, such as Amazon’s experimental résumé-screening tool, which was reportedly scrapped after it was found to penalise applications from women. The design of AI algorithms must therefore be done with full awareness of potential biases.
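To make the bias concern concrete, here is a minimal sketch of how a disparity in outcomes between two groups can be measured in a hiring dataset. All groups, records and figures here are invented for illustration; real audits use far larger datasets and more sophisticated fairness metrics.

```python
# Hypothetical toy hiring data, invented for illustration only.
records = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "A", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]

def selection_rate(records, group):
    """Fraction of applicants in `group` who were hired."""
    members = [r for r in records if r["group"] == group]
    return sum(r["hired"] for r in members) / len(members)

rate_a = selection_rate(records, "A")
rate_b = selection_rate(records, "B")

# A common heuristic (the "four-fifths rule"): a selection-rate
# ratio below 0.8 is a red flag for disparate impact.
ratio = rate_b / rate_a
print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}, ratio: {ratio:.2f}")
```

In this toy example the ratio falls well below the 0.8 threshold, which is the kind of signal an audit of a trained hiring model would look for before deployment.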

AI’s use for knowledge-based outcomes relies upon its ability to be dynamic, accurate and personal. Accuracy is critical to allowing AI to become an authoritative tool in decision-making. Precautions around fake and deepfake data and images will be important to build into AI. Detection processes need to be well-developed and reliable. If trust in the accuracy of the data is harmed, this could break down the potential benefits of AI. Concurrently, we must accept that some inaccuracy is inevitable, and a solid understanding of the technology and algorithms behind AI is required.

AI has the potential to both help and harm people through the collection and representation of their data. Privacy concerns around digital data collection, storage and use are already well understood. For AI to be successful, it must rely not only on real data but also on synthetic data. Synthetic data, generated by computer algorithms rather than collected from real-world observations or experiments, has both positive and negative implications for data ethics. Because it can be generated to be non-identifiable, synthetic data could help resolve the privacy concerns currently persisting in the community.
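As a simple illustration of the synthetic-data idea, the following sketch (with entirely invented figures) fits aggregate statistics to a hypothetical dataset and then samples new, non-identifiable records from them, so that no individual real record is reproduced:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical "real" data: ages of survey respondents (invented).
real_ages = [23, 35, 41, 29, 52, 38, 47, 31, 26, 44]

# Fit only aggregate statistics; the individual records are discarded.
mean = statistics.mean(real_ages)
stdev = statistics.stdev(real_ages)

# Draw synthetic records from a distribution matching those aggregates.
synthetic_ages = [round(random.gauss(mean, stdev)) for _ in range(1000)]

print(f"real mean={mean:.1f}, synthetic mean={statistics.mean(synthetic_ages):.1f}")
```

The synthetic sample preserves the statistical shape of the original data for analysis or model training, while containing no record traceable to any real person. Production systems use far more sophisticated generators, but the privacy principle is the same.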

In the end, what we would like to achieve is better, faster and more reliable knowledge that we can use to manage our complex society. To guide such a process, we need experts across all elements of society to collaborate and think through how to shape our future, with the aim of creating reliable and balanced systems. Only humans can judge issues such as values, ethics and emotion. Simply creating a “rational” system is not going to work and will make the situation worse rather than better. In many ways, this is a philosophical process rather than a scientific one.

Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.
