ChatGPT advice sparks man’s psychosis, medical journal alleges

ChatGPT’s Medical Claims Spark Both Triumph and Controversy

An earlier heart‑warming account described a mother who, after more than a dozen failed examinations, turned to the AI chatbot and uncovered the root of her son’s neurological mystery. A newer report reminds the public that the same platform’s medical insights can also lead to misdiagnosis or unintended harm.

From Life‑Saving Diagnosis to Neuropsychiatric Misguidance

  • Life‑Saving Diagnosis – A mother discovered her son’s rare neurological disorder through ChatGPT, enabling the family to access the needed treatment and preserve a life.
  • Unintended Harm – A separate case illustrates how ChatGPT’s medical advice, rather than producing a correct diagnosis, suggested a dietary substitution that caused bromide intoxication (bromism), a condition that triggers neuropsychiatric symptoms such as psychosis and hallucinations.

Key Takeaways for Parents and Healthcare Professionals

  1. Verify AI Findings – Confirm any AI‑generated medical conclusion with a qualified healthcare provider before acting.
  2. Monitor Symptom Evolution – Be alert for side effects, especially when the AI suggests a medication or substitute substance that can accumulate in the body or interact with existing treatment.
  3. Educate on AI Limitations – Communicate that AI chatbots provide information based on available data, but do not replace professional medical evaluation.

These dual stories underscore the promise and the perils of AI‑powered medical assistance. When used with caution, the technology can reveal hidden health conditions; yet, when unverified, it can inadvertently steer patients toward dangerous outcomes.

Trust ChatGPT to give you a disease from a century ago. 

A 60‑Year‑Old Man Arrives at the Hospital With Bromide Toxicity After ChatGPT Advice

Case Summary

A case report in the Annals of Internal Medicine details a hospital admission in which a 60‑year‑old male patient arrived with bromism after consulting ChatGPT about his health.

Doubts About a Neighbor

The patient expressed a strong suspicion that his neighbor was discreetly poisoning him, a claim that added to the perplexity of the situation.

Replacing Common Salt With Sodium Bromide

  1. He reviewed literature that highlighted the adverse effects of sodium chloride, the common table salt used worldwide.
  2. Seeking a healthier alternative, and reportedly guided by ChatGPT, he replaced his household salt with sodium bromide.
  3. The substitution ultimately led to bromide toxicity, forcing him to seek medical attention.

Implications for Public Health

This case underscores that sodium bromide is not a safe substitute for sodium chloride, and that AI‑sourced dietary advice needs professional verification given the potential for significant toxicity.


The Hospital Encounter

Key Findings

  • The patient reported intense thirst yet was deeply suspicious of the water offered to him.
  • He distilled his own water and placed strict restrictions on what he would consume.
  • Admission to the hospital intensified his distress, prompting comprehensive evaluations.

Initial 24‑Hour Course

Within the first day, the patient displayed:

  • Escalating paranoia
  • Auditory and visual hallucinations
  • An escape attempt, culminating in an involuntary psychiatric hold for grave disability.


Don’t forget the friendly human doctor

Revisiting Bromism: An Unexpected Case

Background

Bromism, the result of chronic exposure to bromide salts, is a condition considered almost extinct today. A recent study highlights its lingering effects, noting that most people are unaware of the disease’s historical roots.

Historical Use

  • 19th‑century medicine prescribed bromide compounds for mental and neurological conditions, especially epilepsy.
  • Into the 20th century, bromide remained in widespread use, frequently as a sleep aid, and bromism became a well‑documented problem.

Clinical Symptoms

Long‑term ingestion of bromide salts has been linked to:

  • Delusions and disorientation.
  • Loss of muscle coordination and persistent fatigue.
  • In severe instances, patients experienced psychosis, tremors, or even coma.

Regulatory Measures

Governments worldwide took steps to control bromide use. In 1975, the U.S. government restricted the sale of bromide salts in over‑the‑counter products, recognizing their dangerous side effects.


AI and Healthcare: The Uncertain Path Ahead

Recent medical investigations found that ChatGPT 3.5 returned ambiguous, potentially harmful responses when queried about clinical substitutions. For instance, the model suggested that chloride could be replaced with bromide; it noted that context mattered, but issued no specific health warning. The physicians involved observed that a medical professional would normally probe the underlying motive for such a question, a step the AI bot omitted.

When AI Assists, Context Matters

  • Positive outcomes usually coincide with detailed context and extensive data.
  • Expert caution underscores that an AI bot is not a substitute for thorough medical evaluation.

Diagnostics and Rare Disorders

A research paper published in the journal Genes reported that GPT-4.5 and GPT-4 demonstrate weak diagnostic accuracy for rare disorders. The study emphasizes that AI consultation alone cannot replace the expertise of a qualified doctor.

OpenAI’s Official Stance

When LiveScience queried OpenAI, the response was unequivocal:

  • “Do not treat the output of our services as a sole source of truth or factual information.”
  • “Seek professional advice when faced with potentially harmful answers.”

OpenAI’s safety teams actively aim to reduce risks and encourage users to consult professionals.

Future Promises: GPT‑5 and Safer Completion

GPT‑5 promises fewer hallucinations and a focus on safe completions. The model is trained to provide the most helpful answer possible while respecting safety boundaries, according to OpenAI’s own description.

Nevertheless, the primary obstacle remains: an AI assistant cannot physically examine a patient or reliably assess their clinical features. Only when deployed in a medical setting under the guidance of certified professionals can AI deliver trustworthy results.