Man Follows ChatGPT Diet Advice, Develops Psychosis

Gizmodo

A recent medical case study has brought into sharp focus the alarming consequences of relying on artificial intelligence for critical personal advice, particularly in health matters. Doctors at the University of Washington documented a disturbing case in which a man developed psychosis attributed to bromide poisoning after meticulously following dietary recommendations generated by ChatGPT. This cautionary tale serves as a stark reminder of the potential pitfalls when sophisticated AI tools are used without human oversight.

The man’s ordeal began after he sought to reduce his intake of table salt (sodium chloride), a common dietary concern. Unable to find specific advice on replacing chloride, he turned to ChatGPT, reportedly asking the AI how chloride could be safely substituted. According to the case report, ChatGPT suggested bromide as an alternative. Trusting this guidance, the man purchased sodium bromide online and consumed it for three months.

Bromide compounds have a complex history in medicine. In the early 20th century, they were widely used to treat various conditions, including anxiety and insomnia. However, medical professionals eventually recognized that high or chronic doses of bromide could be toxic, leading ironically to neuropsychiatric issues—a condition known as bromism. By the 1980s, bromide had been largely phased out of most medications, and cases of poisoning became exceedingly rare, though it still appears in some veterinary products and dietary supplements. The current incident marks what is believed to be the first documented case of bromide poisoning directly influenced by AI advice.

The severity of the man’s condition became apparent when he was admitted to a local emergency room. He presented with acute agitation and paranoia, expressing fears that his neighbor was poisoning him. Despite experiencing thirst, he refused to drink water provided by staff. His symptoms escalated to include vivid visual and auditory hallucinations, culminating in a full-blown psychotic episode. After he attempted to escape, doctors placed him under an involuntary psychiatric hold due to his severe mental impairment.

Medical staff, suspecting bromism early in his treatment, administered intravenous fluids and antipsychotic medication, which gradually stabilized his condition. Once coherent, the man disclosed his three-month regimen of sodium bromide, guided by ChatGPT. When doctors later tested ChatGPT 3.5 with a similar query, the AI did indeed suggest bromide as a possible replacement for chloride. While the AI’s response reportedly noted that the context of the replacement mattered, it critically failed to warn about the dangers of consuming bromide or to ask why the user was seeking such information.

Fortunately, the man made a slow but steady recovery. He was eventually weaned off antipsychotic medication and discharged from the hospital after three weeks, remaining stable at a two-week follow-up appointment. The doctors involved in the case highlighted a crucial paradox: while AI tools like ChatGPT hold immense potential to democratize scientific information and bridge the gap between experts and the general public, they also carry a significant risk of disseminating decontextualized or misleading information. They noted, with considerable understatement, that a human medical expert would almost certainly never recommend substituting bromide for chloride in a diet. This case serves as a powerful reminder that while AI can offer vast amounts of data, the discernment and wisdom of human expertise remain irreplaceable, especially when health and well-being are at stake.