AI could worsen racism, sexism in Australia, commissioner warns
The rapid integration of artificial intelligence into Australian society, while promising significant productivity gains, carries a grave risk of exacerbating existing social inequalities, according to a stark warning from the nation’s Human Rights Commissioner. As the federal government prepares to discuss AI’s economic potential at an upcoming summit, a growing chorus of voices, including unions and industry bodies, is raising alarms about the technology’s potential to entrench racism and sexism if left unregulated.
Human Rights Commissioner Lorraine Finlay has cautioned that the pursuit of economic benefits from AI must not come at the cost of increased discrimination. She pointed to a critical lack of transparency in the datasets used to train AI tools, which makes it difficult to identify and mitigate the biases they carry. Finlay explained that “algorithmic bias” means unfairness is built into the technology itself, so that the decisions it produces carry that unfairness forward. This is compounded by “automation bias”, whereby humans increasingly defer to machine decisions and can overlook, or even reinforce, discriminatory patterns without being aware of it. The commission has consistently advocated for a dedicated AI Act, alongside strengthening existing legislation such as the Privacy Act and rigorous testing of AI systems for bias. Finlay urged the government to establish new legislative safeguards promptly, emphasizing the necessity of bias testing, auditing, and robust human oversight.
The commissioner’s concerns emerge amid an internal Labor party debate over the best approach to AI governance. Senator Michelle Ananda-Rajah, a former medical doctor and AI researcher, has notably broken with some of her colleagues by proposing that all Australian data be “freed” to tech companies. Her rationale is that training AI models on diverse local data would stop overseas biases being perpetuated and better reflect Australian life and culture. While Ananda-Rajah opposes a dedicated AI Act, she firmly believes content creators must be compensated when their work is used in AI training. Without opening up domestic data, she argues, Australia risks perpetually “renting” AI models from international tech giants, with no oversight of, or insight into, how they work.
Evidence of AI bias is already mounting, both in Australia and overseas. Studies have found discriminatory outcomes in critical areas such as medicine and job recruitment. An Australian study published in May, for instance, found that job candidates interviewed by AI recruiters risked being discriminated against if they spoke with an accent or lived with a disability. Ananda-Rajah cited skin cancer screening as another example where algorithmic bias in AI tools could lead to patients being treated unequally, stressing the need to train models on comprehensive, diverse Australian data while safeguarding sensitive information.
While the “freeing” of data is seen by some as part of the solution, other experts stress that it is only one component of a broader fix. Judith Bishop, an AI expert at La Trobe University, acknowledged that more Australian data could improve AI training and warned against over-reliance on US models trained on foreign datasets, but cautioned that local data alone would not solve the problem. Similarly, eSafety Commissioner Julie Inman Grant has voiced concern over the lack of transparency in AI training data, calling on tech companies to be open about their data sources, develop robust reporting tools, and ensure their products draw on diverse, accurate and representative data. Inman Grant described the “opacity of generative AI development” as deeply problematic, warning that large language models could “amplify, even accelerate, harmful biases – including narrow or harmful gender norms and racial prejudices”. With development concentrated in a handful of companies, she said, certain voices and perspectives risk being sidelined.
The overarching sentiment among these experts is that while AI offers immense potential, its development and deployment in Australia demand urgent regulatory attention, a commitment to data diversity, and genuine transparency if the technology is to serve all Australians fairly rather than deepen existing divides. The discussions at the federal economic summit, and within political circles, reflect a growing recognition that navigating AI’s future requires balancing innovation against ethical responsibility, particularly around intellectual property and privacy protections, which media and arts groups fear are under threat from “rampant theft”.