AI’s medical diagnostic skills still need a check-up
Study led by University of Waterloo researchers discovers high risk of medical misinformation in self-diagnoses by ChatGPT
By Media Relations
A University of Waterloo-led study found that ChatGPT-4o, OpenAI's AI language model, answered nearly two-thirds of open-ended medical diagnostic questions incorrectly. Researchers adapted almost 100 questions from a medical licensing exam to resemble real user queries and had medical students assess the responses. Only 37% of ChatGPT's answers were correct, and many responses were deemed unclear by experts and non-experts alike. The study highlights the risks of relying on AI for self-diagnosis, as people may receive false reassurance or unnecessary alarm. While the latest ChatGPT model performed better than previous versions, it still lacks the nuanced accuracy required for medical advice. The researchers urge caution, emphasizing that a human health-care practitioner remains the best source for medical diagnosis. The findings were published in JMIR Formative Research.