AI Firms Seek Copyright Exemption; Australia's Arts Minister Says No

The Conversation

The debate surrounding artificial intelligence and intellectual property rights is intensifying, with Australia’s Arts Minister Tony Burke recently asserting a firm stance against weakening copyright laws. His comments directly address a contentious proposal from the Productivity Commission, which suggested a text and data mining exception to the Australian Copyright Act. Such an exception would permit AI large language models, like ChatGPT, to be trained on copyrighted Australian works without requiring explicit permission or payment.

This proposal has ignited fierce opposition from creators. Songwriter and former arts minister Peter Garrett vehemently criticized what he termed the “rampant opportunism of big tech,” accusing them of seeking to “pillage other people’s work for their own profit.” Garrett has urged the federal government to bolster copyright laws, emphasizing the need to safeguard cultural sovereignty and intellectual property against powerful corporate interests aiming to exploit creative works without compensation.

Globally, leading AI companies are actively lobbying for copyright exemptions. In the United States, President Donald Trump, in launching his government’s AI Action Plan, questioned the viability of AI development if every piece of training data required payment. Major tech giants, including Google and Microsoft, have echoed these sentiments in their discussions with the Australian government. Australian tech billionaire Scott Farquhar, co-founder of Atlassian and chair of the Tech Council of Australia, has publicly advocated for a text and data mining exception, arguing that current copyright laws are “outdated” and hinder AI innovation.

At the heart of this conflict lies a fundamental question: what constitutes authorship in the age of AI? Historically, figures like 19th-century English poet Samuel Taylor Coleridge elevated the author to a divinely inspired creator, whose original works reflected unique genius. However, mid-20th-century theorists like Roland Barthes, in his essay “The Death of the Author,” proposed that language itself generates new works, with authors merely acting as “scriptors” who weave together pre-existing linguistic elements—a concept ironically prescient of AI’s text-generating capabilities. Yet, as Minister Burke reflected, for most readers, the interaction remains “very much with the author,” seeking human truths and reflections.

The digital revolution has already demonstrated the profound economic impact of shifting wealth in the creative industries. From 1999 to 2014, global music industry revenue plummeted from US$39 billion to $15 billion due to online piracy. Conversely, online platforms and tech companies profited immensely, with Google’s annual revenue soaring from US$0.4 billion in 2002 to $74.5 billion in 2015, often benefiting from traffic to sites offering pirated content. Today, a new wave of legal challenges is emerging, with authors and publishers filing lawsuits against AI companies for the unauthorized use of books to train large language models. While some initial rulings, such as a US federal judge’s decision that Anthropic did not breach copyright by using books to train its model, compare the process to a “reader aspiring to be a writer,” the broader legal landscape remains uncertain. Some copyright reformers even propose that AI-generated works should be protected by copyright, elevating AI to the same legal status as human authors, arguing that rejecting this view exhibits an “anthropocentric” bias.

However, many argue that AI-generated content, despite its technical sophistication, lacks a crucial element: emotion. Human creativity springs from a lifetime of experiences—joy, grief, and everything in between—and engages audiences on a deeply emotional level, a capacity that AI models currently lack. Instances of low-quality, AI-generated non-fiction books appearing on platforms like Amazon, often without a human author byline, underscore this concern. Publishers and platforms profit from these sales, while no royalties are paid to a human creator. In Hollywood, the issue has been described as “erasure disguised as efficiency,” with producers encountering AI-generated scripts requiring human rewrites. In response, organizations like the US Authors Guild have introduced certification systems to distinguish human-written works, and the European Writers Council has called for clear transparency obligations for AI-generated products.

Readers continue to flock to writers’ festivals, and human authors remain celebrated cultural figures, yet their status is increasingly threatened by AI’s pervasive reach. The ongoing struggle is not merely about financial compensation but about preserving the essence of human creativity and preventing authors from becoming unwitting data donors to AI systems. On that front, the creative community appears determined to resist big tech’s ambitions.