
(HealthDay News) — A chatbot outperforms physicians in clinical reasoning, according to a research letter published online April 1 in JAMA Internal Medicine.

Stephanie Cabral, MD, from the Beth Israel Deaconess Medical Center in Boston, and colleagues compared a large language model's reasoning abilities against human performance using standards developed for physicians. Responses to selected clinical cases, queried in GPT-4 (OpenAI) in August 2023, were compared with responses from 21 internal medicine attending physicians and 18 residents.

The researchers found that median Revised-IDEA (R-IDEA) scores were 10 (range, 9 to 10) for the chatbot, 9 (range, 6 to 10) for attendings, and 8 (range, 4 to 9) for residents. The chatbot had a significantly higher estimated probability of achieving high R-IDEA scores, and significantly higher R-IDEA scores overall, than both attendings and residents. There were no significant differences between attendings' and residents' scores. For diagnostic accuracy, the chatbot performed similarly to attendings and residents; scores were also similar for correct clinical reasoning and cannot-miss diagnosis inclusion. However, the chatbot had more frequent instances of incorrect clinical reasoning (13.8 percent) than residents (2.8 percent) but not attendings (12.5 percent).

"There are multiple steps behind a diagnosis, so we wanted to evaluate whether large language models are as good as physicians at doing that kind of clinical reasoning," coauthor Adam Rodman, MD, also from Beth Israel, said in a statement. "It's a surprising finding that these things are capable of showing the equivalent or better reasoning than people throughout the evolution of a clinical case."

Two authors disclosed ties to industry.

Abstract/Full Text (subscription or payment may be required)