The consensus among experts and pundits on artificial intelligence (AI) technologies such as ChatGPT seems to be that it’s like old age — inevitable, so get used to it.
When the man who heads up CSIRO’s Data61 spoke to Creative Victoria’s podcast team recently about “our AI journey”, the audience – presumably writers, artists, musicians and other old-fashioned creatives – was treated to a few examples: poor old Harrison Ford de-aged onscreen for yet another go at Indiana Jones; a building design app that you can also use to build things with Lego.
As for music, Data61’s Jon Whittle said we’ve got not only AI impressing judges of the Eurovision Song Contest but also cases of increased “productivity”.
For writing, people are “having great fun” generating whole books using these powerful new AI tools, although, Whittle warned, “the outputs are often quite mediocre”.
Not that mediocrity is necessarily a handicap for publishing, but Whittle’s AI examples and his assertion that “the key... is to have good governance mechanisms” might not have settled the nerves of his audience. Right now, in Australia, we apparently have only “ethics principles” to control how AI can and should be used. These principles are voluntary, which is perhaps why Nine News decided it was okay to digitally “edit” an image of MP Georgie Purcell to give us a glimpse of her artificial midriff.
Back in August last year, a government inquiry received more than 500 submissions about AI technologies, which the Media, Entertainment and Arts Alliance (MEAA) said threatened the livelihoods of creators. The Australian Chamber of Commerce and Industry decided we should wait and see before legislating. (It’s a vibe thing — no treaty, no republic, no climate action, no no no, just wait and see.)
Last month, the Department of Industry released an interim report noting that Australia had signed up to an international collaboration on AI safety testing. It also noted the dodginess of ‘voluntary guardrails’ and, echoing Whittle’s view at the CSIRO, now suggests ‘mandatory safety guardrails’ (is that a legal thing, like an actual law?). These will be applied in ‘high-risk settings in close consultation with industry and the community’.
That’d be “industry” – which has financial investment in all this – and “the community”, which finds all this technical talk a bit out of reach but is signing on in droves to have fun writing a whole book with a machine.
The report concludes that, while the Government is ‘considering the right regulatory approach’ to those metaphoric safety guardrails, it’s keen to ensure ‘the use of AI in low-risk settings can continue to flourish largely unimpeded’.
Low-risk is, it seems, anything that makes life easier, speeds up mundane tasks or increases productivity — like writing a novel, or, say, a way to collate information across systems, compute a debt, send a payment demand and cut off benefits, all without the interference of people.
The risk we are speaking about here appears to be legal, political and financial, possibly ethical at a pinch, but probably not qualitative. For in a neoliberal world, there is no distinction between quality and cost, so what is expensive or sells a lot is what is good or valued. This simplifies the messy business of artistic worth.
In the visual arts, where quality and artistic value have undergone such unfathomable revisions, the ground is well prepared for the flourishing of artworks entirely created by a machine. The Brisbane Portrait Prize now accepts entries that were ‘completed in whole or in part by generative artificial intelligence (AI)’, so long as the work is original, entirely completed by the artist and owned outright.
It gets tricky and nuanced when you ask, yes, but how did the AI learn to generate the art? How many images of works by Botticelli, Cézanne, Bourgeois and Kngwarreye are fed into the machine before it’s declared ready to respond to the artist’s ideas? The Brisbane Portrait Prize’s attempt to define originality by referring to outright ownership (copyright) could well face legal challenges if Australia follows the United States, where a court recently ruled that AI-generated art cannot be copyrighted.
The publishing copyright issue is being tested in the U.S., with sufficiently high-profile writers involved to make even the most docile media outlets want to report on it. The Australian response, from agencies such as the Australian Publishers Association, is a stern statement urging the Government to take into account some ‘core principles’, such as ethical frameworks and transparency.
More detailed and complex analyses of the critical stoushes working their way through courts, particularly in the U.S., all pretty much end with a similar conclusion: the bad will be outweighed by the good. At The Conversation, there is a steady stream of very clear academic articles, including by Jon Whittle, that have a go at how the lust for innovation – and money – is crashing against the artificial reefs of ethical defences.
The problem is, I think, that discussions about ethics, art and quality, which have never been of much interest in Australian culture, are now considered too difficult or irrelevant or sort of offensive to that weird mythical character much beloved by mainstream media and the political class — ordinary Australians.
When Creative Victoria aimed to help creators understand what AI might mean for arts and culture, and sensibly called on one of the people best placed to explain it, that the result was a banal trivialising of what lies ahead is perhaps not Jon Whittle’s fault. There’s just not much appetite, even in our cultural sector, for the crunchy conversations. We’re a bland mob.
Rosemary Sorensen was a newspaper books and arts journalist based in Melbourne, then Brisbane, before moving to regional Victoria where she founded Bendigo Writers Festival, which she directed for 13 years.