AI reliance causes doctor 'deskilling,' study warns
Artificial intelligence is rapidly transforming healthcare, offering the promise of enhanced diagnostic accuracy and improved patient outcomes. Yet, new research suggests a concerning paradox: while AI tools can significantly boost performance in the short term, over-reliance might subtly degrade human expertise, potentially leaving professionals less capable when technology is absent.
A recent study published in The Lancet Gastroenterology & Hepatology illuminates this dynamic within endoscopy. The research focused on the use of AI image recognition technology to identify and remove precancerous growths, known as adenomas, during colonoscopies. Initial findings confirmed AI’s benefit, showing a 12.5 percent increase in the adenoma detection rate (ADR) when the technology was employed—a development expected to save lives. However, the study then explored what happened when endoscopists, previously accustomed to AI assistance, performed colonoscopies without the tool.
The results were striking. Based on data from four endoscopy centers in Poland collected between September 2021 and March 2022, the analysis compared ADRs for standard, non-AI-assisted colonoscopies before and after doctors gained exposure to AI in their clinics. The adenoma detection rate for standard colonoscopies fell significantly, from 28.4 percent before AI exposure to 22.4 percent afterward, an absolute drop of 6.0 percentage points. This led the authors to conclude that “continuous exposure to AI might reduce the ADR of standard non-AI assisted colonoscopy, suggesting a negative effect on endoscopist behaviour.”
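To put those figures in perspective, a quick back-of-the-envelope calculation (illustrative only, not taken from the paper) shows how the 6.0-point absolute drop translates into a relative decline of roughly one fifth of the baseline detection rate:

```python
# Illustrative calculation using the ADR figures reported in the article.
adr_before = 28.4  # percent: standard colonoscopy ADR before AI exposure
adr_after = 22.4   # percent: standard colonoscopy ADR after AI exposure

absolute_drop = adr_before - adr_after            # in percentage points
relative_drop = absolute_drop / adr_before * 100  # as a share of the baseline

print(f"Absolute drop: {absolute_drop:.1f} percentage points")
print(f"Relative drop: {relative_drop:.1f}% of the pre-AI detection rate")
# Prints: 6.0 percentage points absolute, about 21.1% relative
```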
This finding echoes warnings issued by professional bodies years ago. In 2019, the European Society of Gastrointestinal Endoscopy (ESGE) identified “deskilling” and “over-reliance on artificial intelligence” as significant concerns in its guidelines on implementing AI. The authors of The Lancet paper believe their study is the first to directly observe the effect of continuous AI exposure on clinical outcomes, and they hope it will prompt further research into AI’s broader impact on healthcare professionals.
The phenomenon of “deskilling” due to automation is not new. Decades ago, psychologist Lisanne Bainbridge explored this concept in her 1983 work, “Ironies of Automation,” noting how automating industrial processes could inadvertently create new problems for human operators rather than simply solving old ones. More recently, researchers from Purdue University have applied this principle to modern contexts, suggesting that designers who become overly reliant on AI might also experience hindered skill development.
This concern extends beyond medicine and design. In June, MIT researchers published a related study linking the use of large language model (LLM) chatbots to lower brain activity, hinting at a potential cognitive cost of excessive delegation to AI. Princeton University computer scientist Arvind Narayanan has also voiced apprehension about developer deskilling. He distinguishes this from earlier fears that compilers would eliminate the need for programmers to understand machine code, fears that never materialized. Instead, Narayanan worries about a scenario in which a junior developer relies so heavily on AI for “vibe coding” that they lose the ability to program independently, lacking a grasp of core programming principles.
While AI promises unprecedented efficiency and capability, these studies collectively underscore a critical challenge: integrating advanced technology in a way that truly augments, rather than inadvertently diminishes, human skill and critical thinking. The ongoing research highlights the need for a nuanced approach to AI adoption, one that carefully balances technological assistance with the imperative to maintain and cultivate human expertise.