One chatbot, one diet tweak, and a hospital bill that could choke a heavyweight champ.
A 60-year-old man ditched table salt on GPT’s advice and, three months later, landed in critical care.
The “smart” swap? Sodium bromide, a toxic industrial chemical better suited to pools than pasta. Fatigue, hallucinations, paranoia: all from trusting a health AI that can’t tell food from floor cleaner. This case isn’t a glitch; it’s a preview of what happens when we outsource judgment to code.
The case, published in the Annals of Internal Medicine, details how the man followed GPT’s recommendation to replace sodium chloride with sodium bromide. On paper, the two look like close chemical cousins; in the body, bromide accumulates over weeks, producing a toxic condition called bromism.
His symptoms ranged from skin eruptions and severe thirst to auditory and visual hallucinations so intense that he was placed on a psychiatric hold after trying to leave the hospital.
Doctors treated him with fluids, electrolytes, and antipsychotics, stabilizing him over a three-week hospital stay. Experts say the AI likely pulled bromide from outdated or context-free sources, where it is listed as a substitute for chloride in chemical reactions, not in food. The model doesn’t check for medical safety; it predicts text, not consequences.
Health professionals warn that GPT isn’t a substitute for trained medical judgment. Without regulation, integrated medical databases, or risk flags, large language models will keep surfacing harmful advice in the wrong context. In the age of “smart” everything, the smarter move might still be asking an actual doctor.
Because when GPT health advice can poison a man with a toxic salt substitute, the real danger isn’t the glitch — it’s the trust.