Sponsored

AI ethics & governance: What we can expect from AI policymaking


Artificial intelligence is advancing more quickly than most laws – or the people who implement them – can keep up.

It’s already shaping the way we live and work, through things like automated job-screening tools and generative image models whose outputs can be indistinguishable from human creations.

In Australia, the conversation about how artificial intelligence ought to be regulated has made the leap from tech-forum echo chambers to debate in Parliament, as leaders work to strike a balance between innovation and responsibility.

AI governance is not just about limits. It’s about defending people’s rights, ensuring data use is fair and transparent, and supporting new technology that we can trust. So, what next?

Here’s what we can expect as Australia and the rest of the world begin to implement real rules around artificial intelligence.

1. Recognising the different types of AI and their risks

Before we can talk about policy, we need to talk about what’s actually being regulated. There are many different types of AI, each with its own strengths and weaknesses. Some are narrow and task-specific, like navigation tools or customer-service chatbots. Others, generative systems that can write, create images or code, raise thornier ethical issues. Policymakers are beginning to map out these categories, much as Europe’s AI Act separates low-risk systems from high-risk ones.
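To make the idea concrete, here is a minimal sketch (in Python, and purely illustrative, not drawn from any actual legislation) of what a risk-tier taxonomy in the spirit of Europe’s AI Act might look like. The tier names, example use cases and obligations below are assumptions for the example.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. navigation tools, spam filters
    LIMITED = "limited"            # e.g. chatbots (transparency duties)
    HIGH = "high"                  # e.g. job screening, credit scoring
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring (prohibited)

# Hypothetical mapping from use case to tier, loosely following the
# kinds of examples the EU AI Act's risk categories are built around.
USE_CASE_TIERS = {
    "navigation": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "job_screening": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Rough, illustrative summary of what each tier might require."""
    return {
        RiskTier.MINIMAL: "no extra obligations",
        RiskTier.LIMITED: "disclose AI use to the user",
        RiskTier.HIGH: "risk assessment, documentation, human oversight",
        RiskTier.UNACCEPTABLE: "prohibited",
    }[tier]

print(obligations(USE_CASE_TIERS["job_screening"]))
# risk assessment, documentation, human oversight
```

The point of a taxonomy like this is that the obligations attach to the use case, not to the underlying technology.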

The goal isn’t to slow progress down or halt it; it’s about setting reasonable expectations. When developers and users know which rules apply to which types of systems, innovation can continue to thrive without putting the public at risk.

2. Who’s responsible when AI inevitably gets it wrong?

Accountability is one of the hardest parts of managing AI. When something goes wrong, whether it's misguided medical advice, a recruiting system that favours one group over another, or deepfakes that spread false information, someone has to be held accountable. The problem is that the chain of accountability isn’t always clear-cut. 

That uncertainty is what the Australian Government’s Safe and Responsible AI agenda aims to address, and the interim response released in early 2024 sketched out what those guardrails could look like. The basic principle is simple: people, not algorithms, are responsible.

Here’s a recent example of why this matters: a report that Deloitte produced for the Australian Government with the help of AI was found to include fictitious sources and inaccurate information. It is precisely the kind of failure that “safe and responsible AI” seeks to avoid, and a reminder that no matter how advanced the system, humans should remain involved in vetting what it produces.

3. Making sure AI stays fair

Have you ever generated a result with AI, only to realise that what you’re getting seems biased or slightly skewed? These systems inherit bias because they’re trained on data generated by people, and people come with their own biases and blind spots.

With that in mind, regulators in Australia are already stepping up to make AI fairer and more responsible. The Pilot Assurance Framework sets out, in clear terms, the steps for testing systems and documenting decision-making. The goal isn’t to make AI perfect (that’s impossible) but to make sure it’s accountable. Developers are being urged to check for skewed results early, to work with more diverse datasets, and to treat fairness as part of the design process, not an afterthought.
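As a concrete illustration, here is a minimal sketch of one common early check for skewed results: comparing positive-outcome rates across groups (sometimes called demographic parity). This is not the Pilot Assurance Framework’s own method, just a hypothetical example; the group names and numbers are invented.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        chosen[group] += int(was_selected)
    return {g: chosen[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Gap between the highest and lowest group selection rates."""
    return max(rates.values()) - min(rates.values())

# Invented screening outcomes for two applicant groups.
outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40
            + [("group_b", True)] * 35 + [("group_b", False)] * 65)

rates = selection_rates(outcomes)
print(rates)              # {'group_a': 0.6, 'group_b': 0.35}
print(parity_gap(rates))  # 0.25 -- large enough to warrant a closer look
```

A gap like the 0.25 above doesn’t prove discrimination on its own, but it is exactly the kind of signal the testing and documentation steps are meant to surface and explain.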

It’s really about honesty. The answer isn’t to pretend that bias doesn’t exist, but rather face it and address it where we can. When people understand how an AI system makes its decisions (and know the individuals behind it are working to ensure fairness), they’re much more likely to trust the technology. 

4. Being upfront about AI use

When we engage with technology that makes decisions for us, we deserve to know what’s going on. Whether it’s a chatbot answering a health question or a program scanning job applications, people should be told when AI is involved and how their details are being handled. A quick note like “Created with the help of AI” can bring that relationship out into the open and make it feel respectful rather than secretive or sneaky.
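In code, that disclosure can be as simple as attaching a notice to anything a model produces. The sketch below is a hypothetical example, not any real product’s API:

```python
def with_ai_disclosure(text: str, source: str = "an AI system") -> str:
    """Attach a plain-language AI-use notice to generated content."""
    return f"{text}\n\n[Created with the help of {source}]"

reply = with_ai_disclosure(
    "Mild headaches are common, but see a GP if symptoms persist.",
    source="an AI assistant",
)
print(reply)
```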

And then there’s the issue of consent, a topic increasingly on the agenda under Australia’s emerging “Safe and Responsible AI” framework. Giving people the right to see, delete, or opt out of how their information is used helps build trust. It’s a signal that people, not the technology, stay in control.
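Here’s an illustrative sketch, assuming a simple in-memory store, of what those “see, delete, or opt out” controls could look like in practice; the class and method names are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegister:
    records: dict = field(default_factory=dict)  # user_id -> stored data
    opted_out: set = field(default_factory=set)

    def view(self, user_id: str) -> dict:
        """Right of access: show a user what is held about them."""
        return dict(self.records.get(user_id, {}))

    def delete(self, user_id: str) -> None:
        """Right of erasure: remove a user's data entirely."""
        self.records.pop(user_id, None)

    def opt_out(self, user_id: str) -> None:
        """Exclude a user's data from future AI training or profiling."""
        self.opted_out.add(user_id)

    def usable_for_training(self, user_id: str) -> bool:
        return user_id in self.records and user_id not in self.opted_out

register = ConsentRegister()
register.records["user_1"] = {"postcode": "2000"}
register.opt_out("user_1")
print(register.view("user_1"))                 # {'postcode': '2000'}
print(register.usable_for_training("user_1"))  # False
```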

In the long run, being clear and upfront isn’t just a legal box to tick. It’s what makes new tools something people are comfortable using, and it ensures innovation builds on trust rather than confusion. 

5. Creating a homegrown ethical AI industry

Australia’s AI industry is still in its infancy, but it’s starting to come into its own. What makes it stand out is the focus on doing things responsibly from the beginning. Rather than chasing fads or racing to produce the largest models, there is a growing commitment to making Australian-developed AI fair, dependable and built with integrity.

Organisations such as the National AI Centre (NAIC) and CSIRO’s Data61 are pushing in that direction. They’re bringing together businesses, researchers and government agencies to grow local capability while keeping transparency and trust front and centre. A big part of that is helping smaller Australian developers access better data and training resources, so they can compete fairly without cutting corners.

We’re also hearing a lot more about the benefits of drawing on diverse, local datasets that truly represent our population, as opposed to falling back on global models that may miss local context. The ultimate aim here isn’t just to make AI that works; it’s to make AI that reflects who we are. If Australia continues to lead development with that kind of attitude, we’ll have world-class tech that is truly human-centred.

AI ethics and governance are starting to feel real now. They are influencing how Australia creates and adopts new technology, not only in the laboratory but also across everyday life. The objective, in essence, is to keep innovation moving ahead while ensuring that people remain protected and informed.

What’s encouraging is how much more thoughtful the conversation has become. Businesses are looking beyond quick wins. Academics and researchers are emphasising fairness, safety and transparency. More everyday Aussies, too, are thinking about where their data goes and how these systems impact them.

If we can sustain that growing awareness, Australia has a real opportunity to show the world what responsible AI looks like. The future of this technology doesn’t have to feel distant or complicated. It can be something people actually understand and trust.

 