A 60-year-old man developed a rare medical condition after ChatGPT advised him on alternatives to table salt, according to a case study published in Annals of Internal Medicine: Clinical Cases.
The patient consulted the AI chatbot after studying the negative health effects of sodium chloride, or table salt, the study said. According to the study, ChatGPT told him that “chloride can be swapped with bromide, though likely for other purposes, such as cleaning.” Following that interaction, the man allegedly replaced the sodium chloride in his diet with sodium bromide for three months. He ended up developing bromism and was hospitalized.
A 60-year-old man concerned about the health risks of sodium chloride, or table salt, consulted #ChatGPT for health advice. For 3 months, he replaced sodium chloride with sodium bromide, at the suggestion of ChatGPT.
He went to the ER because he feared his neighbor was… pic.twitter.com/PsDgTj6QD7
— Annals of Internal Medicine: Clinical Cases (@AnnalsofIMCC) August 7, 2025
Bromism was a “well-recognized” syndrome during the early 20th century, contributing to as many as eight percent of psychiatric admissions at that time, according to the study. Sodium bromide served as a sedative during that era.
The patient allegedly presented at a hospital claiming his neighbor might be poisoning him. Medical personnel said he was paranoid about drinking water despite experiencing excessive thirst. He attempted to escape within 24 hours of admission and was subsequently placed on an “involuntary psychiatric hold” and treated for psychosis, according to the study. After stabilizing, he reported additional bromism symptoms such as facial acne and insomnia.
University of Washington researchers who documented the case couldn’t access the patient’s ChatGPT conversation log, according to the study. When they queried ChatGPT themselves about chloride replacements, the chatbot allegedly suggested bromide without providing health warnings or asking why they sought such information, “as we presume a medical professional would do,” they wrote.
The authors warned that AI apps could “generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.” OpenAI’s usage guidelines state that its chatbot isn’t “intended for use in the diagnosis or treatment of any health condition,” according to The Guardian.