Worker oversight key to AI's economic promise

The Guardian

The burgeoning narrative around artificial intelligence often frames it as an inevitable force poised to deliver unprecedented economic prosperity. Yet, a crucial counter-argument suggests that this technological revolution will only yield genuine benefits if it augments human capabilities rather than merely replacing them. This perspective, championed by Nobel laureate Daron Acemoglu, posits that the true measure of AI’s success lies in its ability to empower workers, not just automate their tasks.

While the tech industry and various economic bodies frequently herald AI, particularly advanced systems like large language models and predictive bots, as a rapid pathway to wealth, Acemoglu’s research paints a more nuanced picture. He contends that when new technology simply displaces human labor, it may indeed boost corporate profits but offers minimal broader economic gains. The collective dividend, he argues, is only realized when workers are not just passive users of new tools, but active participants in their design, shaping new connections, markets, and capabilities.

Practical testing bears this theory out. For instance, research by the UTS Human Technology Institute found that nurses readily adopted AI to streamline their burdensome paperwork but drew a firm line at AI intervening directly in patient care. Similarly, retail workers welcomed intelligent inventory systems while striving to preserve the human element in customer relationships. Even public servants, wary of past algorithmic missteps like the Robodebt scandal, sought assurance that AI would not be weaponized against citizens. These examples underscore a consistent theme: workers embrace AI when it eases their burden and enhances their roles, but resist it when it threatens their professional autonomy or the quality of human interaction.

To foster an AI ecosystem that truly enriches society, a significant structural reform is needed: the mandated establishment of “worker councils” tasked with overseeing, monitoring, and shaping the introduction of AI technologies. This approach would extend the existing general duty of care employers hold for workplace safety to encompass the deployment of new technologies. To fulfill this expanded duty, employers would be required to genuinely engage their workforce, providing opportunities to test, refine, propose safeguards, and define clear boundaries for AI use. These councils, democratic and representative in nature, would be equipped with the necessary information to understand the technology, the authority to observe its application, and an ongoing role in assessing its impact. In unionized environments, existing consultative processes could be leveraged, while in others, employers or industry bodies would need to establish genuinely accountable frameworks.

Predictably, such proposals often face resistance. Employers may lament increased “red tape” or perceive it as a relinquishing of control. However, actively involving workers is not about conceding power; it is about harnessing invaluable on-the-ground knowledge. The history of technological transformation is rife with failures, not because the technology itself was flawed, but because it was ill-suited to the realities of human work. To assume AI is exempt from this dynamic is a dangerous fallacy propagated by vendors. The tech industry, too, may push back, arguing that any “friction” on change stifles innovation in the global “race to AI.” Yet, given the industry’s track record, from exploitative algorithms to social media platforms that abdicated their responsibilities, there is a palpable public skepticism that demands a more cautious and democratically controlled approach.

Australians, in particular, harbor deep reservations about AI, sensing it is an external force imposed for an ill-defined greater good. Establishing new democratic structures, like AI councils, could provide a much-needed sense of agency, allowing citizens to influence how these tools evolve, how their data (the core resource powering AI) is collected and used, and how they are compensated for its use. This is not merely about regulating technology; it is about defining the kind of nation we aspire to be. If AI truly is a transformative, quasi-divine technology capable of delivering economic nirvana, then embedding democratic structures in its development is the surest path to earning public trust, a trust that has been severely tested by past promises of technological utopia.