ChatGPT Health Advice Causes Rare Psychiatric Illness

404 Media

A recent case study published in the Annals of Internal Medicine details a startling incident in which a man inadvertently induced bromism, a toxic syndrome with prominent psychiatric symptoms that has been largely unseen for decades, after following health advice from the artificial intelligence chatbot ChatGPT. The case highlights the potential dangers of self-medicating or altering one’s diet based on information from large language models without professional medical oversight.

The 60-year-old man presented to an emergency room exhibiting severe auditory and visual hallucinations, convinced his neighbor was poisoning him. After receiving treatment for dehydration, he disclosed that his symptoms stemmed from a self-imposed, highly restrictive diet aimed at eliminating salt entirely. For three months, he had replaced all table salt (sodium chloride) in his food with sodium bromide, a compound best known today as a veterinary anticonvulsant for dogs but also used in pool sanitization and as a pesticide. He said his decision was based on information he had gathered from ChatGPT.

According to the case study, the man, drawing on a college background in nutrition, set out to conduct a personal experiment to remove chloride from his diet. His reading on the negative effects of sodium chloride led him to ChatGPT, where he asked about chloride substitutes. The chatbot reportedly suggested bromide as a replacement, apparently alluding to other contexts, such as cleaning, rather than diet. That exchange led him to buy sodium bromide online and begin his dangerous regimen.

Attempts to reproduce the chatbot’s behavior lent support to his account. When prompted with a question like “what can chloride be replaced with?”, ChatGPT offered “Sodium Bromide (NaBr): Replacing chloride with bromide” as a direct suggestion. The bot did subsequently ask for context and offered safer alternatives such as MSG when “in food” was specified, but it crucially failed to issue a clear warning against ingesting sodium bromide. The authors of the case study likewise noted that when they tried to recreate the scenario, the model never asked about the user’s intent, as a human medical professional presumably would.

The man’s self-poisoning resulted in a severe psychotic episode, characterized by paranoia and vivid hallucinations. Bromism, while rare in the 21st century, was a significant public health concern in the 1800s and early 1900s; a 1930 study indicated it accounted for up to 8% of psychiatric hospital admissions. Cases declined after the U.S. Food and Drug Administration phased bromide out of over-the-counter products between 1975 and 1989. After three weeks of hospitalization, the man’s psychotic symptoms gradually subsided, and he made a full recovery.

This incident underscores the complex challenges and ethical considerations surrounding the increasing integration of AI into personal health management. AI tools can offer valuable information and support in healthcare, as highlighted by OpenAI CEO Sam Altman’s recent announcement of “safe completions” in GPT-5 for ambiguous or potentially harmful questions, and by anecdotes of people using chatbots to better understand medical diagnoses. Even so, this case serves as a stark reminder of the critical need for human oversight and professional medical consultation when dealing with health-related information, especially when it involves self-treatment or dietary modifications. The line between helpful information and dangerous misinformation can be perilously thin when AI is consulted without critical discernment.