AI Regulation in Australia: Understanding the mandatory guardrails

By Pooja Chauhan
(Image via Steve Johnson | Unsplash)

Artificial intelligence (AI) has taken the world by storm, advancing faster than anyone anticipated. The AI market is expected to reach $407 billion by 2027 and AI-powered tools are being used in almost every industry.

The rise of artificial intelligence has also brought concerns about safety and responsibility. Some communities, such as content creators, have already pushed back hard against artificial intelligence over stolen art, leading to legal action.

Artificial intelligence was previously thought to be ‘uncontrollable’ and aptly dubbed a 'Pandora’s Box' — once unleashed, it can’t be stopped. Fortunately, recent developments suggest this may not be true.

The Australian Government has taken centre stage in positioning Australia as a global leader in safe and responsible AI use. To support the Government’s efforts, the National AI Centre (NAIC) has developed the first iteration of its Voluntary AI Safety Standard.

The proposed standard sets out best practices that Australian businesses, sectors and industries can follow when developing, procuring and deploying AI systems and services. These guidelines may greatly affect enterprises at every level, from leaders and project managers with a Diploma of Project Management through to everyday employees.

The rise of artificial intelligence

Artificial intelligence and computers that can rival humans have been a concept for decades, popularised by media like Terminator and Blade Runner.

In real life, AI began in the 1940s but has seen a huge resurgence in the 21st century due to technological advancements.

When it comes to the rise of artificial intelligence, most people are discussing machine learning (ML) and deep learning (DL). Machine learning involves algorithms and models that allow an AI system to learn from data and improve over time.

Deep learning is a form of machine learning. Deep learning models are built on biologically inspired neural networks, loosely modelled on the human brain, which allow the AI to learn, process data and improve.

While artificial intelligence has existed for years, machine learning and deep learning artificial intelligence models have boomed in the past decade. This has led to the rise of artificial intelligence and AI-powered tools, like generative AI.
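The “learning” in machine learning can be sketched in a few lines of Python. This is a deliberately tiny, illustrative toy (a one-parameter linear model trained by gradient descent), not a depiction of any real production system: the model starts out wrong and repeatedly adjusts itself to reduce its error on example data.

```python
# Toy illustration of machine learning: a model with one learnable
# parameter (w) improves itself by nudging w to reduce its error on data.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x

w = 0.0    # the model's single learnable parameter, starting from ignorance
lr = 0.05  # learning rate: how big each corrective nudge is

for _ in range(200):        # each pass over the data is one "epoch"
    for x, y in data:
        pred = w * x        # model's current guess
        error = pred - y    # how wrong the guess is
        w -= lr * error * x # gradient step: adjust w to shrink the error

print(round(w, 3))  # w converges toward 2.0, recovering y = 2x from the data
```

Deep learning works on the same principle, but with millions or billions of such parameters arranged in layered neural networks.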

The importance of AI regulation

AI regulation and guidelines are critical, especially at this moment in history while the technology is still advancing. Regulation will help protect people from the technology’s risks, such as data privacy breaches and bias.

Other areas include:

  • Regulations can ensure that artificial intelligence doesn’t infringe on human rights.

  • Artificial intelligence relies on large amounts of data to learn. Proper guidelines will ensure that only ethically sourced data is used for learning.

  • Strict guidelines can ensure that artificial intelligence isn’t learning from biased data or statistics.


(Image via Randa Marzouk | Unsplash)

Regulations can also help increase transparency between AI providers or developers and consumers, and hold companies and governments accountable to ethical standards when utilising artificial intelligence.

We’ve already seen deeply concerning problems arise from the lack of regulation amid artificial intelligence’s rapid development. These include:

  • Generative artificial intelligence using content for learning without the permission of creators.

  • Artificial intelligence being used maliciously and/or without the consent of the subject, like deepfakes.

  • Artificial intelligence spreading misinformation due to incorrect or biased data.

Artificial intelligence and Australia

As AI evolves, governments in every corner of the world are struggling to keep up with the pace and implement regulation. This is true for Australia too, where there is no singular overarching law for artificial intelligence.

Instead, artificial intelligence in Australia falls under the jurisdiction of multiple laws, such as the Privacy Act 1988, the Australian Human Rights Commission (AHRC) AI principles and the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024.

This means the legal frameworks for handling artificial intelligence are lagging behind and messily spread across various laws. The Australian Government, in collaboration with the NAIC, has proposed a risk-based regulatory framework and a Voluntary AI Safety Standard, aiming to address these issues proactively before it’s too late.

What is the NAIC?

The National Artificial Intelligence Centre (NAIC) was established in 2021. It was created to support and accelerate the artificial intelligence industry in Australia and ensure that organisations use the technology ethically and responsibly.

The National Artificial Intelligence Centre was originally part of the Commonwealth Scientific and Industrial Research Organisation (CSIRO). It has since become the Australian Government’s flagship organisation for artificial intelligence and is now part of the Department of Industry, Science and Resources.


(Image via Possessed Photography | Unsplash)

Proposed mandatory guardrails

In early September, Australia’s Federal Government released its proposed set of mandatory guardrails for high-risk artificial intelligence, based on the guardrails in the voluntary standard for organisations using artificial intelligence.

In its proposal, the Federal Government states that it believes artificial intelligence can improve social and economic wellbeing, but that current regulatory systems are not fit for purpose for the risks AI poses.

The Australian Government and NAIC’s proposed guardrails and voluntary safety standard aim to help organisations:

  • Protect people and communities from harm.

  • Avoid reputational and financial risks.

  • Increase the trust and reputation of AI systems, products, and services.

  • Align with legal needs and the Australian population’s expectations.

  • Operate more seamlessly in an international economy.

The ten guardrails

The ten proposed mandatory guardrails from the Australian Government are listed below. They are based on the voluntary safety standard and aim to ensure artificial intelligence is developed and deployed responsibly.

  1. Establish, implement, and publish an accountability process, including governance, internal capability, and a strategy for regulatory compliance.

  2. Establish and implement a risk management process to identify and mitigate AI-related risks.

  3. Protect AI systems, and implement data governance measures to manage data quality and provenance.

  4. Test AI models and systems to evaluate model performance and monitor the system once deployed.

  5. Enable measures for human control or intervention in an AI system to achieve meaningful human oversight.

  6. Inform end-users regarding AI-enabled decisions, and interactions with AI and AI-generated content.

  7. Establish processes for people impacted by AI systems to challenge use or outcomes.

  8. Be transparent with other organisations across the AI supply chain about the data, models and systems used, to help them effectively address risks.

  9. Keep and maintain records that allow third parties to assess compliance with guardrails.

  10. Undertake conformity assessments to demonstrate and certify compliance with guardrails.

The NAIC and Australian Government believe these guardrails will create a foundation for safe and responsible AI use. To comply with the proposed AI safety standards, organisations will need to adopt all ten guardrails. 

The guardrails also align with international standards for artificial intelligence risk management. These include ISO/IEC 42001:2023 and the U.S. National Institute of Standards and Technology’s AI Risk Management Framework 1.0.

Currently, these guardrails are voluntary under the AI Safety Standard. However, the Australian Government has proposed making them mandatory and will update the guardrails over the next six months based on feedback.
