AI Generates Biased Australian Stereotypes, New Research Finds
Generative artificial intelligence (AI) is often touted as a revolutionary force, promising to reshape our future with its intelligence and creativity. However, new research from Curtin University, published by Oxford University Press and detailed by Tama Leaver and Suzanne Srdarov for The Conversation, directly challenges this optimistic view, revealing a troubling undercurrent of bias in how these tools depict Australian themes.
The research, conducted in May 2024, set out to understand how generative AI visualises Australia and its people. Researchers posed 55 distinct text prompts to five leading image-generating AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI, and Midjourney. The prompts were intentionally brief to uncover the AI’s default assumptions. From these queries, approximately 700 images were collected. Notably, some prompts, particularly those involving “child” or “children,” were refused entirely, indicating a risk-averse stance by some AI providers.
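The article does not describe the researchers' tooling, and the prompting may well have been done by hand through each tool's interface. Purely as an illustrative sketch, the snippet below shows how deliberately brief prompts of this kind could be issued programmatically to one of the tested generators (Dall-E 3, via OpenAI's official Python SDK). The prompt list and output filenames are hypothetical stand-ins, not the study's actual 55 prompts or workflow.

```python
# Illustrative sketch only: issuing deliberately brief prompts to an image
# generator (Dall-E 3 via OpenAI's Python SDK). Prompts and filenames are
# hypothetical examples, not the researchers' actual prompt set.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Deliberately short prompts, so the model's default assumptions do the work
prompts = [
    "an Australian family",
    "an Australian mother",
    "an Australian father",
    "an Australian's house",
    "an Aboriginal Australian's house",
]

for i, prompt in enumerate(prompts):
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,                      # Dall-E 3 returns one image per request
        size="1024x1024",
        response_format="b64_json",
    )
    image_bytes = base64.b64decode(result.data[0].b64_json)
    with open(f"image_{i:02d}.png", "wb") as f:
        f.write(image_bytes)
```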
The findings painted a consistent picture: AI-generated images of Australia often reverted to tired, monocultural tropes. The visual narrative frequently evoked an imagined past, dominated by red dirt, Uluru, the outback, untamed wildlife, and sun-kissed “bronzed Aussies” on beaches.
A closer examination of images depicting Australian families and childhoods revealed deeply ingrained biases. The “idealised Australian family,” according to these AI tools, was overwhelmingly white and suburban, adhering to a traditional, heteronormative structure firmly rooted in a settler-colonial past. Prompts for an “Australian mother” typically yielded images of white, blonde women in neutral settings, peacefully holding babies. The sole exception was Adobe Firefly, which consistently produced images of Asian women, often outside domestic settings and sometimes with no clear connection to motherhood. Crucially, none of the tools depicted First Nations Australian mothers unless explicitly prompted, suggesting whiteness as the AI’s default for motherhood in Australia.
Similarly, “Australian fathers” were consistently white and often portrayed outdoors, engaging in physical activity with children. Curiously, some images depicted fathers holding wildlife instead of children, including one instance of a father with an iguana – an animal not native to Australia. Such “glitches” point to problematic or miscategorised data within the AI’s training sets.
Perhaps most alarming were the results for prompts related to Aboriginal Australians. Prompts to visualise Indigenous people frequently surfaced concerning, regressive stereotypes, including “wild,” “uncivilised,” and even “hostile native” caricatures. The researchers opted not to publish the images generated for “typical Aboriginal Australian families” because of their problematic racial biases and their potential reliance on data and imagery of deceased individuals, which rightfully belongs to First Nations people.
Racial stereotyping also manifested acutely in depictions of housing. A prompt for an “Australian’s house” consistently generated images of suburban brick homes with well-kept gardens, swimming pools, and green lawns. In stark contrast, an “Aboriginal Australian’s house” would produce a grass-roofed hut on red dirt, adorned with “Aboriginal-style” art motifs and featuring an outdoor fire pit. These striking differences were consistent across all tested image generators, highlighting a profound lack of respect for Indigenous Data Sovereignty – the right of Aboriginal and Torres Strait Islander peoples to own and control their own data.
Even with recent updates to the underlying AI models, including OpenAI’s GPT-5 released in August 2025, the problem persists. A prompt to ChatGPT running GPT-5 for an “Australian’s house” yielded a photorealistic suburban home, while an “Aboriginal Australian’s house” produced a more cartoonish hut in the outback, complete with a fire and dot-painting imagery in the sky. These recent results underscore the enduring nature of these biases.
Given the pervasive integration of generative AI tools into social media, mobile phones, educational platforms, and widely used software like Microsoft Office, Photoshop, and Canva, their capacity to produce content riddled with inaccurate and harmful stereotypes is deeply concerning. The research suggests that reducing cultures to cliches might not be an accidental “bug” but rather an inherent “feature” of how these AI systems are trained on tagged data, leading to reductive, sexist, and racist visualisations of Australians.