Growing Tolerance for AI Errors as Status Quo Shifts
Throughout history, humanity has navigated radical shifts brought about by new inventions, from the printing press to the internet. Each transformative technology has inevitably been met with a degree of skepticism by those living through its emergence. Over the past three decades alone, the internet has profoundly reshaped how we seek, process, and trust information, and more recently, how we engage with artificial intelligence.
Initially, new technologies and methods often face intense scrutiny, with their flaws and errors judged more harshly than established practices. These apprehensions are not without merit; vital debates continue around accountability, ethics, transparency, and fairness in the deployment of AI systems. Yet, a deeper question remains: how much of our aversion stems from the technology itself, and how much is simply the discomfort of departing from the familiar status quo?
This phenomenon, dubbed “algorithm aversion,” describes the tendency to judge an algorithm more severely for making the same mistake a human might make. My research in cognitive psychology, conducted with colleagues Jonathan A. Fugelsang and Derek J. Koehler, explores how our evaluation of errors is shaped by context, particularly by what we perceive as the norm. Despite algorithms consistently outperforming humans in various prediction and judgment tasks, a lingering distrust has persisted for decades. This resistance dates back to the 1950s, when psychologist Paul Meehl’s argument that simple statistical models could make more accurate predictions than trained clinicians was met with what Daniel Kahneman later described as “hostility and disbelief.” That early resistance continues to echo in more recent studies demonstrating algorithm aversion.
To investigate this bias, we designed an experiment in which participants evaluated mistakes made by either a human or an algorithm. Crucially, before presenting the error, we told them which option was considered “conventional”—historically dominant, widely used, and typically relied upon in that scenario. In half the trials, humans were framed as the traditional norm; in the other half, algorithms were designated the conventional agent.
Our findings revealed a significant shift in judgment. When humans were presented as the norm, algorithmic errors were indeed judged more harshly. However, when algorithms were framed as the conventional method, participants became more forgiving of algorithmic mistakes and, strikingly, more critical of humans making the same errors. This suggests that people’s reactions may have less to do with the intrinsic nature of algorithms versus humans, and more to do with whether a method aligns with their mental model of how things are “supposed to be done.” In essence, we exhibit greater tolerance when the source of an error is also the prevailing status quo, and harsher judgment when mistakes originate from what feels new or unconventional.
It is true that explanations for algorithm aversion often resonate intuitively; a human decision-maker, for instance, might grasp real-life nuances that an algorithmic system cannot. But is this aversion solely about the non-human limitations of AI, or is part of the resistance rooted in the broader discomfort of transitioning from one established norm to another? Viewing these questions through the historical lens of human relationships with past technologies compels us to reconsider common assumptions about why algorithms are often met with skepticism and less forgiveness.
Signs of this transition are already pervasive. Debates around AI ethics and accountability have not, for instance, slowed its widespread adoption. For decades, algorithmic technology has been quietly assisting us in navigating traffic, finding partners, detecting fraud, recommending entertainment, and even aiding in medical diagnoses. While numerous studies document algorithm aversion, recent research also points to “algorithm appreciation,” where individuals actively prefer or defer to algorithmic advice in diverse situations. As our reliance on algorithms grows—especially when they prove faster, easier, or demonstrably more reliable—a fundamental shift in how we perceive these technologies, and their inevitable errors, appears unavoidable. This evolution from outright aversion to increasing tolerance suggests that our judgment of mistakes may ultimately depend less on who makes them, and more on what we’ve grown accustomed to.