AI's Looming Crisis: Journalism's Collapse Threatens Its Foundation
Artificial intelligence is rapidly assimilating vast quantities of journalistic content, much like humans do: to construct a comprehensive understanding of the world, to cultivate critical thinking, to distinguish truth from falsehood, to refine writing abilities, and to condense history and context into accessible forms. Yet, a critical question looms: what becomes of AI when the very journalistic institutions it relies upon begin to falter? On what foundation of verified truth will AI answer complex questions, draft communications, or assist with intricate tasks, if that foundation crumbles? While the warning bells for journalism have rung for decades, the profound shift in how we discover information, often termed the “end of search,” now signals a potentially fatal blow. This evolving landscape poses a significant challenge for AI, and for humanity as we strive to navigate an increasingly convoluted world.
In our haste to integrate generative AI into nearly every facet of our lives, we have largely overlooked a fundamental premise: AI cannot function effectively without a reliable baseline of verified facts. Currently, this essential factual bedrock is meticulously constructed and maintained by what is often referred to as “traditional” journalism—the kind underpinned by rigorous fact-checking and editorial oversight. As AI continues to disrupt established paradigms of information discovery, media monetization, and news consumption behaviors, it inadvertently undercuts the very industry that supplies it with the verified facts it so critically depends upon. Just as a democratic society struggles without objective journalism, so too does artificial intelligence.
Recent research from Apple underscores this vulnerability, observing that it does not take much to push generative AI into "complete accuracy collapse." The study further indicates that generative AI models frequently lack robust logical reasoning capabilities and fail to operate effectively beyond a certain threshold of complexity. Consider, for instance, a detailed analytical piece such as Andrew Marantz's exploration of autocracy in The New Yorker, which masterfully weaves together disparate historical threads to illuminate contemporary events. It is difficult to imagine AI replicating such nuanced insight; pressed to "think" that deeply, models have demonstrated a propensity to break down before they can distill the salient, impactful points that define such profound human analysis.
Even more concerning findings emerge from a BBC report, which concluded that AI struggles to accurately summarize news content. In an experiment, ChatGPT, Copilot, Gemini, and Perplexity were tasked with summarizing 100 news stories, with expert journalists subsequently evaluating each output. The report revealed that, beyond containing outright factual inaccuracies, the chatbots frequently "struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context." Alarmingly, nearly one in five of these summaries (19%) contained false facts or distorted quotations.
Further studies corroborate these systemic issues. Research from MIT Sloan has highlighted AI tools’ tendency to fabricate citations and reinforce existing gender and racial biases. Concurrently, analysis in Fast Company suggests that the “good enough” standard often accepted for AI-driven journalism is tolerated primarily due to the revenue these tools generate.
This brings us to the less altruistic reason for AI's consumption of journalistic content: financial gain. Crucially, none of the substantial revenue generated by AI is currently being reinvested into the journalistic institutions that underpin this entire technological experiment. What becomes of our society when the core pillar of a free and accurate press collapses under the weight of a technology that has unwittingly consumed and undermined it? Those guiding the development of AI must recognize and actively support the intrinsic value of fact-checked reporting, now more than ever, to ensure its continued existence and, by extension, the reliability of AI itself.