AI Health Advice Leads to Rare Medical Condition

The Guardian

A recent case report in the Annals of Internal Medicine offers a stark warning about the burgeoning role of artificial intelligence in personal health: a 60-year-old man developed a rare and serious condition after a misguided consultation with an earlier version of ChatGPT. The incident underscores growing concern among medical professionals about the potential for AI chatbots to dispense inaccurate or dangerous health advice.

According to the report from researchers at the University of Washington in Seattle, the patient presented with bromism, also known as bromide toxicity. The condition was a significant public health problem in the early 20th century, when it was linked to nearly one in ten psychiatric admissions. The man told doctors that, after reading about the adverse effects of sodium chloride, or common table salt, he had turned to ChatGPT for ways to eliminate chloride from his diet. He went on to take sodium bromide for three months, despite having read that “chloride can be swapped with bromide, though likely for other purposes, such as cleaning.” Sodium bromide was historically used as a sedative.

The authors of the Annals of Internal Medicine article highlighted the case as a potent example of how AI use “can potentially contribute to the development of preventable adverse health outcomes.” Although they could not access the man’s conversation log with the chatbot, they ran their own test: when they asked ChatGPT what chloride could be replaced with, its response included bromide, crucially without providing any health warning or asking why the information was wanted, an omission a human medical professional would invariably address.

The researchers emphasized that AI applications like ChatGPT can generate “scientific inaccuracies,” “lack the ability to critically discuss results,” and “ultimately fuel the spread of misinformation.” They also noted that a qualified medical professional would be highly unlikely to ever suggest sodium bromide as a table salt replacement, further illustrating the disconnect between AI-generated advice and established medical practice.

The patient’s symptoms were severe and alarming. He presented at a hospital exhibiting paranoia: he believed his neighbor might be poisoning him and, despite intense thirst, showed an irrational fear of the water he was offered. Within 24 hours of admission he attempted to escape the hospital, after which he was sectioned and treated for psychosis. Once stabilized, he reported additional symptoms consistent with bromism, including facial acne, excessive thirst, and insomnia.

The incident comes shortly after OpenAI, ChatGPT’s developer, announced a significant upgrade to the chatbot, which is now powered by the GPT-5 model. The company claims that one of GPT-5’s core strengths lies in health-related queries, promising improved accuracy and a proactive approach to “flagging potential concerns” such as serious physical or mental illness. However, OpenAI has consistently stressed that its chatbot is not intended as a substitute for professional medical advice.

The case serves as a critical reminder for both patients and healthcare providers. While AI holds promise as a bridge between scientific knowledge and the public, it also carries the inherent risk of promoting decontextualized or dangerously misleading information. Medical professionals, the article concludes, will increasingly need to consider the influence of AI when assessing how patients obtain their health information.