Technology Analysis

EU leads the way in regulating AI

The European Union is currently drafting an AI Act to regulate its usage globally (Image by Dan Jensen)

The European Union is drafting regulations to mitigate the dangers and risks involved with AI, writes Paul Budde.


IT IS CLEAR that if we want to use artificial intelligence (AI) for the good of society, we need to start providing guidelines, regulations and most likely legislation around it. The industry has been talking about this for many years, but governments have been slow to react.

In general, governments are reactive rather than proactive when it comes to regulations and in many cases, that is indeed the best way to foster innovation. However, there are some existential issues with AI that require a swifter reaction. We have seen the negative impacts of social media. We cannot simply let technology – driven by commercial organisations – dictate how we use AI. These organisations are here to make a profit and are not here to safeguard the common good. So we need to act now; there is simply too much at stake.

As this is a global issue, we need to cooperate to find the best way forward. Most of these developments are coming out of the United States, yet that country favours minimal regulation, so we cannot expect much social leadership from it.

Over the last few years, the European Union (EU) has proven to be more effective as a regulator. The EU has taken on social media with hefty fines in those cases where companies have breached European law. It is also the first to start working on international guidelines for AI.

In my opinion, it makes sense for countries such as Australia, New Zealand, Canada, Japan, Korea and others that have similar values to look at the EU model and, if possible, join them or work with them, rather than trying to invent their own set of guidelines. As mentioned, this is a global issue and requires a global approach.

Below is a summary of an article written by my European colleague, J Scott Marcus:

Adapting the European Union AI Act to deal with generative artificial intelligence

Generative artificial intelligence (AI) and the Foundation Models (FMs) on which it relies are a rapidly developing field with the potential to be both beneficial and harmful. Generative AI models can be used to create realistic and convincing text, images and videos. This has a wide range of potential applications, such as in the creation of art, entertainment and education.

However, generative AI also has the potential to be used for malicious purposes, such as the creation of disinformation, the spread of hate speech and the exploitation of vulnerable individuals.

The EU is currently in the process of drafting an AI Act, which would set out a regulatory framework for AI in the EU. The draft AI Act, as amended by the European Parliament, does not adequately distinguish between different types of generative AI models, and it does not differentiate the monitoring and compliance requirements imposed on different providers of generative AI models.

This article argues that the EU should amend the draft AI Act before enactment to regulate foundation models and generative AI in a way that better balances the need to protect the public with the need to promote innovation and productivity.

The proposed amendments would include:

  • a more nuanced approach to different foundation models and generative AI;
  • a re-thinking of the provisions on the use of copyrighted data for training purposes, and a reflection as to whether they belong in this legislation at all; and
  • a mandatory incident reporting procedure as part of the quality control framework.

The article also re-emphasises the importance of good cybersecurity as regards foundation models and generative AI.

A nuanced approach to different foundation models and generative AI

The current legislation seems to do a reasonably good job of protecting the public against harm, but treating both large and small foundation model providers exactly the same risks impeding innovation by consolidating the market dominance of firms that already have a considerable lead in FMs. Larger firms are likely to be systemically more important and also to be better able to afford regulatory compliance.

At the same time, even small firms might produce FMs that work their way into applications and products that reflect high-risk uses of AI, so they cannot get a free pass. The principles of risk identification, testing and documentation should therefore apply to all FM providers, including non-systemic foundation models, but the rigour of testing and verification could be different.

Exactly how to implement this differentiation is likely to require guidance, probably from the European Commission. As for the identification of foundation models that are so important as to require the most intensive possible monitoring, this would benefit from internationally agreed frameworks, technical standards and benchmarks.

Re-thinking the provisions on the use of copyrighted data for training purposes

EU copyright law as revised in 2019 already provides for an exception from copyright for text and data mining. Conditions under which royalties must be paid have also been modernised in the revised copyright law. The AI Act as amended by the European Parliament nonetheless requires providers of generative AI to publicly document the use of any copyrighted material in training data.

If the use is explicitly permitted by copyright law, one must wonder whether the burdensome task of maintaining a directory adds any value. Aside from that, there is no obvious reason for treating the use of copyrighted material differently for generative AI than for other online use, which raises the question of why changes, if they were needed at all, are proposed to be made here rather than as an amendment to EU copyright law.

A mandatory incident reporting procedure as part of the quality control framework

The current text of the AI Act requires any provider of a foundation model to provide a quality control system, but says nothing about what that entails. The article suggests that there be a mandatory incident reporting procedure for foundation models. This would help to ensure that the risks posed by generative AI models are identified and addressed in a timely manner.

Requirements for safety and security

The article emphasises the importance of providers of generative AI investing in safety and security. This helps to protect users from the risks of malicious attacks on generative AI models.

In conclusion, foundation models and the generative AI that they enable are powerful technologies with the potential to be both beneficial and harmful. The EU should revise the draft AI Act before enactment to help ensure that the benefits of foundation models and generative AI are realised while the risks, including risks to productivity and innovation, are mitigated. These amendments would help to create a regulatory framework that is fit for purpose in the age of generative AI.

Read the full article here.

Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.
