New York – In a significant experiment, a US medical school used OpenAI’s GPT-4 to see if it could make accurate diagnoses in challenging medical cases.
Physician-researchers at Beth Israel Deaconess Medical Center (BIDMC) in Boston found that GPT-4 selected the correct diagnosis nearly 40 per cent of the time.
GPT-4 also included the correct diagnosis in its list of potential diagnoses in two-thirds of challenging cases, according to the findings, published in JAMA.
“Recent advances in artificial intelligence have led to generative AI models that are capable of detailed text-based responses that score highly in standardised medical examinations,” said Adam Rodman, co-director of the Innovations in Media and Education Delivery (iMED) Initiative at BIDMC.
“We wanted to know if such a generative model could ‘think’ like a doctor, so we asked one to solve standardised complex diagnostic cases used for educational purposes. It did really, really well,” said Rodman, who is also an instructor in medicine at Harvard Medical School.
To assess the chatbot’s diagnostic skills, Rodman and colleagues used clinicopathological case conferences (CPCs), a series of complex and challenging patient cases including relevant clinical and laboratory data, imaging studies, and histopathological findings published in the New England Journal of Medicine for educational purposes.
Evaluating 70 CPC cases, the AI exactly matched the final CPC diagnosis in 27 cases (39 per cent). In 64 per cent of cases, the final CPC diagnosis was included in the AI’s differential — a list of possible conditions that could account for a patient’s symptoms, medical history, clinical findings and laboratory or imaging results.
“While chatbots cannot replace the expertise and knowledge of a trained medical professional, generative AI is a promising potential adjunct to human cognition in diagnosis,” said first author Zahir Kanjee, a hospitalist at BIDMC and assistant professor of medicine at Harvard Medical School.
“It has the potential to help physicians make sense of complex medical data and broaden or refine our diagnostic thinking,” he said.
While the study adds to a growing body of literature demonstrating the promising capabilities of AI technology, more research is needed on its optimal uses, benefits and limits — particularly on privacy issues — to understand how these new AI models might transform health care delivery. (IANS)