Earlier this year, an uplifting story detailed how a mother turned to ChatGPT and discovered that her son was suffering from a rare neurological disorder, after more than a dozen doctors had failed to identify the real problem. Thanks to the AI chatbot, the family was able to access the required treatment and save a life.
Not every case of ChatGPT medical evaluation leads to a miraculous outcome. A new report claims that ChatGPT doled out misleading medical advice that left a person with a rare condition called bromide intoxication, or bromism, which can lead to neuropsychiatric issues such as psychosis and hallucinations.
Trust ChatGPT to give you a disease from a century ago.
A report published in the Annals of Internal Medicine describes the case of a man who landed in hospital with bromism after seeking advice from ChatGPT about his health. The case is striking because the 60-year-old had come to suspect that his neighbour was discreetly poisoning him.
The whole episode began when the man came across reports detailing the negative health effects of sodium chloride (aka common salt). After consulting ChatGPT, he replaced table salt with sodium bromide, which eventually led to bromide toxicity.

“He was noted to be very thirsty but paranoid about water he was offered,” says the case report, which adds that the patient distilled his own water and placed multiple restrictions on what he consumed. His condition soon worsened after he was admitted to a hospital, where evaluations were conducted.
“In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability,” adds the report.
Don’t forget the friendly human doctor
The latest case of ChatGPT landing a person in a pickle is quite astounding, particularly due to the sheer rarity of the situation. “Bromism, the chronic intoxication with bromide is rare and has been almost forgotten,” says a research paper.
The use of bromine-based salts dates back to the 19th century, when they were prescribed to treat mental and neurological diseases, especially epilepsy. By the 20th century, bromism (or bromide toxicity) was a fairly well-known problem, and bromide salts were also widely used as a form of sleep medication.
Over time, it was discovered that consuming bromide salts leads to nervous system issues such as delusions, lack of muscle coordination, and fatigue, while severe cases can involve psychosis, tremors, or even coma. In 1975, the US government restricted the use of bromides in over-the-counter medicines.

Now, the medical team that handled the case could not access the individual’s ChatGPT conversations, but when they ran their own tests they obtained similarly worrying, misleading answers. OpenAI, on the other hand, thinks AI bots are the future of healthcare.
“When we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do,” the team reported.
Yes, there are certainly cases where ChatGPT has helped people with health issues, but positive results can only be expected when the AI is given detailed context and comprehensive information. Even then, experts suggest exercising extreme caution.
“The ability of ChatGPT (GPT-4.5 and GPT-4) to detect the correct diagnosis was very weak for rare disorders,” says a research paper published in the journal Genes, which adds that a ChatGPT consultation cannot be taken as a replacement for proper evaluation by a doctor.
LiveScience contacted OpenAI about the issue and received the following response: “You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.” The company also highlighted that its safety teams aim to reduce the risk of using its services and to train its products to prompt users to seek professional advice.
Indeed, one of the big promises made at the launch of GPT-5, the company’s latest ChatGPT model, was fewer inaccuracies and hallucinations, along with a greater focus on delivering ‘safe completions’ that guide users away from potentially harmful answers. As OpenAI puts it: “[This] teaches the model to give the most helpful answer where possible, while still maintaining safety boundaries.”
The biggest hurdle, of course, is that an AI assistant can’t reliably examine a patient or investigate their clinical features. Only when AI is deployed in a medical setting by certified health professionals can it yield trustworthy results.