Secret AI use at work: Australia needs clear rules to curb 'shadow AI'

The Conversation

A significant portion of Australian workers are secretly employing generative artificial intelligence (Gen AI) tools in their jobs, often without the knowledge or explicit approval of their employers, according to a recent federal government report. The “Our Gen AI Transition: Implications for Work and Skills” report, published by Jobs and Skills Australia, cites multiple studies indicating that between 21% and 27% of employees, particularly in white-collar sectors, are using AI behind their managers’ backs.

This clandestine adoption presents a striking paradox. While the federal treasurer and the Productivity Commission actively encourage Australians to embrace AI’s potential, many workers feel compelled to conceal their use. Common reasons cited in the report include a perception that using AI constitutes “cheating,” alongside a fear of being perceived as lazy or less competent.

The rise of this unapproved “shadow use” of AI highlights significant gaps in the current governance of AI in Australian workplaces, leaving both employees and employers uncertain about appropriate conduct. Worker-led experimentation can sometimes act as a hidden driver of bottom-up innovation, particularly in sectors where early adopters emerge as unofficial leaders, but it also introduces substantial risks. The report warns that without clear governance, such informal experimentation can fragment practices, making them difficult to scale or integrate later. Crucially, it escalates concerns around data security, accountability, compliance, and the potential for inconsistent outcomes.

Indeed, real-world examples underscore the potential for serious failures arising from unregulated AI use. In Victoria, a stark incident saw a child protection worker input sensitive case details concerning sexual offences against a young child into ChatGPT. This led the Victorian information commissioner to impose a ban on the state’s child protection staff using AI tools until November 2026. Lawyers, too, have faced scrutiny for AI misuse, with documented cases emerging from the United States, the United Kingdom, and Australia, including a recently reported Melbourne murder case in which AI-generated material contained misleading information. Yet even within the legal profession, rules remain patchy and diverge significantly across states. While a lawyer in New South Wales is now explicitly prohibited from using AI to generate or alter affidavit content, other states and territories have not adopted such clear positions, leaving a fragmented regulatory landscape even for professions with critical ethical obligations.

This inconsistency underscores the report’s urgent call for national stewardship of Australia’s transition to generative AI. It advocates for a coordinated national framework, centralised capability, and a nationwide boost in digital and AI skills. This aligns with broader research indicating that Australia’s current AI legal framework contains significant blind spots, necessitating a fundamental rethink of our knowledge systems. In the absence of such a comprehensive national policy, employers are left to navigate a fragmented and often contradictory regulatory environment, increasing the risk of breaches. While national uniform legislation for AI would offer a consistent approach, mirroring the borderless nature of the technology itself, its implementation currently appears unlikely.

Given this complex landscape, employers seeking to curb secret AI use must proactively establish clearer policies and provide comprehensive training. The emerging, albeit imperfect, written guidance within some legal jurisdictions serves as a foundational step. However, the ultimate solution lies in more robust, proactive national AI governance. This would entail clear policies, ethical guidelines, risk assessments, and compliance monitoring, providing much-needed clarity for both workers and employers. Without this unified approach, the very employees who could drive Australia’s AI transformation may continue to operate in the shadows, burdened by the fear of being misjudged as lazy or dishonest, rather than empowered as innovators.