Man Hospitalized After ChatGPT Gives Toxic Medical Advice

Futurism

A recent and alarming incident has cast a stark spotlight on the perilous intersection of artificial intelligence and personal health, revealing the critical dangers of blindly trusting AI for medical guidance. A 60-year-old man, seeking to eliminate salt from his diet, inadvertently poisoned himself with sodium bromide after following a suggestion from OpenAI’s ChatGPT. The resulting condition, known as bromism, led to his hospitalization in what doctors believe is the first documented case of AI-linked bromide poisoning.

The patient’s journey to the emergency department, as detailed in a new paper published in the Annals of Internal Medicine, began after he consulted ChatGPT about dietary changes. Inspired by his past studies in nutrition, he sought to remove chloride from his diet and, over a period of three months, replaced common table salt (sodium chloride) with sodium bromide, a substance he purchased online. Sodium bromide, once used in medicines for anxiety and insomnia, was phased out decades ago in the United States because of its severe health risks; today it appears mainly in industrial products, pesticides, and pool cleaners, as well as in anticonvulsant medication for dogs.

Upon arrival at the hospital, the man presented with deeply concerning psychiatric symptoms, including intense paranoia and vivid auditory and visual hallucinations, even expressing a belief that his neighbor was poisoning him. His condition rapidly escalated into a psychotic episode, necessitating an involuntary psychiatric hold. Doctors administered intravenous fluids and antipsychotic medication, and as his mental state gradually improved over three weeks, he was able to disclose his use of ChatGPT for dietary advice. Physicians later confirmed the chatbot’s problematic recommendation by posing a similar question to ChatGPT, which again suggested bromide without adequate warnings about its toxicity. The man ultimately made a full recovery; he was discharged from the hospital and remained stable at a follow-up visit.

Bromism, the condition he developed, is a rare “toxidrome” resulting from chronic overexposure to bromide. Its symptoms are diverse, spanning neurological and psychiatric manifestations such as confusion, irritability, memory loss, and psychosis, along with physical signs including acne-like rashes and excessive thirst. The rarity of human bromism cases today underscores just how dangerous the chatbot’s advice was, offered as it was without any warning about toxicity.

This incident serves as a stark warning about the inherent limitations and potential hazards of relying on large language models for critical health information. AI chatbots, while powerful, can “hallucinate” or generate false responses, often without sufficient context or safety warnings. The broader healthcare industry is increasingly recognizing these risks. ECRI, a non-profit patient safety organization, has identified the use of AI models in healthcare without proper oversight as the most significant health technology hazard for 2025. Concerns range from biases embedded in training data leading to disparate health outcomes to the risk of inaccurate or misleading information and the potential for AI system performance to degrade over time.

In response to these growing concerns, regulatory bodies and health organizations worldwide are working to establish guidelines and oversight. The World Health Organization (WHO) has issued considerations for the regulation of AI in health, emphasizing transparency and documentation. In the United States, federal initiatives, including President Joe Biden’s Executive Order on AI and the Health Data, Technology, and Interoperability (HTI-1) Final Rule, aim to ensure AI is safe, secure, and trustworthy in healthcare. State laws are also emerging, with some, like California’s AB 3030, requiring healthcare providers to disclose AI use to patients and obtain consent. A key principle across these efforts is that AI cannot be the sole basis for medical decisions; human review and oversight remain paramount. The FDA is also actively strengthening its regulation of AI-enabled medical products, focusing on a flexible yet science-based framework to ensure both innovation and patient safety.

While AI holds immense potential to augment clinical workflows and support healthcare professionals, this incident serves as a potent reminder that these tools are not a substitute for qualified medical advice. The ongoing need for robust safeguards, clear disclaimers, rigorous validation, and, most importantly, human oversight is undeniable as AI continues to integrate into our daily lives.