AI assistance may erode health professionals' colonoscopy skills

A new study published in The Lancet Gastroenterology & Hepatology on August 12, 2025, presents a concerning finding: the routine use of artificial intelligence (AI) assistance during colonoscopies may inadvertently diminish the unassisted skills of experienced health professionals. This research comes amid the rapid and widespread adoption of AI in various medical fields, often hailed for its potential to enhance diagnostic accuracy and patient outcomes.

The observational study, which analyzed over 1,400 non-AI-assisted colonoscopies, revealed a significant decline in the adenoma detection rate (ADR) among endoscopists. ADR, a crucial quality metric reflecting how often precancerous growths (adenomas) are identified during colonoscopy, dropped by roughly 20% in relative terms, from 28.4% to 22.4%, in procedures performed without AI assistance several months after the technology's routine introduction. Researchers from the Medical University of Silesia in Poland, who conducted the study, likened this phenomenon to the "Google Maps effect," in which over-reliance on navigation technology erodes the ability to navigate independently.
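The reported decline can be quoted two ways: six percentage points in absolute terms, or roughly 20% relative to the starting rate. A minimal sketch of the distinction, using the article's figures (the helper names are ours, not the study's):

```python
def percentage_point_change(before: float, after: float) -> float:
    """Absolute change, in percentage points."""
    return after - before

def relative_change(before: float, after: float) -> float:
    """Change relative to the starting value, as a fraction."""
    return (after - before) / before

adr_before, adr_after = 28.4, 22.4  # unassisted ADR before/after AI rollout

print(f"{percentage_point_change(adr_before, adr_after):+.1f} percentage points")
print(f"{relative_change(adr_before, adr_after):+.1%} relative change")
# The relative figure is about -21%, i.e. the ~20% decline the study reports.
```

Conflating the two framings is a common source of confusion in health reporting, so it is worth noting the article's "20%" is the relative figure.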

This finding introduces a critical paradox. AI-assisted colonoscopy has been widely embraced, with numerous trials demonstrating its effectiveness in increasing overall adenoma detection rates. AI systems are designed to work in real-time, highlighting polyps that human eyes might otherwise miss, thereby improving the quality of a procedure vital for preventing bowel cancer. Indeed, the study itself noted that the overall ADR, including AI-assisted procedures, did see an increase from 22.4% to 25.3% after AI integration, effectively masking the decline in unassisted performance.

However, the study is the first to directly suggest a negative impact of continuous AI exposure on a medical professional’s ability to perform a patient-relevant task without technological aid. Experts have long theorized about the risk of “deskilling” or “automation bias” when humans become overly reliant on automated systems. Dr. Catherine Menon, a Principal Lecturer at the University of Hertfordshire, highlighted that such a deskilling effect could have broader implications across other medical disciplines, potentially leading to poorer patient outcomes if AI support becomes unavailable due to system failures or cyber-attacks. Conversely, Professor Venet Osmani of Queen Mary University of London cautioned that the study’s observational nature means other factors, such as a sharp increase in workload (which nearly doubled after AI introduction in the study), might also contribute to a lower detection rate due to fatigue or reduced time per procedure.

The broader integration of AI into healthcare faces challenges beyond potential skill degradation. Medical education currently offers limited exposure to these technologies, leaving many physicians unprepared to incorporate AI into their practice or to critically evaluate its suggestions. Issues of data quality, algorithmic bias, and ethical considerations surrounding patient data privacy also persist. For AI systems to be properly integrated into clinical care, specialized training is essential, with a focus on handling complex clinical situations and on critically interpreting both diverse patient data and AI output.

Ultimately, this study serves as a crucial reminder that while AI offers immense potential to revolutionize healthcare, its implementation must be carefully managed. The goal should be to augment, not erode, human expertise, ensuring that medical professionals retain their fundamental skills and critical judgment. Thoughtful integration, coupled with comprehensive training and robust contingency plans for AI unavailability, will be paramount to harnessing the benefits of AI while safeguarding patient care.