December 22, 2024

Brighton Journal


Can ChatGPT help doctors with diagnosis? What the science says

ChatGPT-4 demonstrated strong diagnostic accuracy when tested on the same clinical cases as healthcare professionals.

(Ernie Mundell – HealthDay News) — Doctors’ brains are very good at making decisions, but even the smartest doctors can benefit from some diagnostic help from ChatGPT, a recent study suggests.

A key advantage comes from a thought process called “probabilistic reasoning”: knowing the odds of something happening (or not happening). “Humans struggle with probabilistic reasoning, the practice of making decisions based on calculated probabilities,” explained the study’s lead author, Dr. Adam Rodman, of Beth Israel Deaconess Medical Center in Boston.

“Probabilistic reasoning is one of several components of making a diagnosis, which is an incredibly complex process that uses a variety of different cognitive strategies,” he explained in a Beth Israel news release.

The Beth Israel team used data from a previously published survey of 550 health professionals, all of whom were asked to apply probabilistic reasoning to five different medical cases.

Probabilistic reasoning, an important part of the clinical diagnostic process, is a skill where ChatGPT can give healthcare professionals significant support (illustrative image: Infobae).

In the new study, Rodman’s team gave the same five cases to the AI algorithm behind ChatGPT, the large language model (LLM) ChatGPT-4.

The cases included information from common medical tests, such as a chest scan for pneumonia, a mammogram for breast cancer, a stress test for coronary artery disease, and a urine culture for a urinary tract infection.

Based on that information, the chatbot used its own probabilistic reasoning to reassess the likelihood of various patient diagnoses. Across the five cases, the chatbot was more accurate than the human doctors in two, about as accurate in two others, and less accurate in one. The researchers considered this a “tie” between humans and the chatbot on medical diagnosis.

But the ChatGPT-4 chatbot excelled when a patient’s tests came back negative (rather than positive), diagnosing more accurately than the doctors in all five cases.

“People sometimes feel the risk is higher than it really is after a negative test result, which can lead to overtreatment, more testing and more medication,” said Rodman, an internal medicine physician and researcher in the Department of Medicine at Beth Israel.
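To illustrate the kind of arithmetic involved (a hypothetical example with made-up numbers, not figures from the study), Bayes’ theorem shows how sharply a negative result should lower the estimated probability of disease:

```latex
% Hypothetical illustration only: the pre-test probability (30%), sensitivity (90%)
% and specificity (80%) are assumed numbers, not data from the JAMA Network Open study.
P(\text{disease}\mid\text{negative})
  = \frac{P(\text{negative}\mid\text{disease})\,P(\text{disease})}
         {P(\text{negative}\mid\text{disease})\,P(\text{disease})
          + P(\text{negative}\mid\text{no disease})\,P(\text{no disease})}
  = \frac{0.10 \times 0.30}{0.10 \times 0.30 + 0.80 \times 0.70}
  \approx 0.05
```

In this made-up scenario, the post-test probability falls from 30 percent to roughly 5 percent; continuing to treat the risk as high after the negative result is exactly the kind of error the study says humans tend to make.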

The study found that ChatGPT-4 is particularly useful for diagnosis when clinical tests are negative (Getty).

The study appears in the Dec. 11 issue of JAMA Network Open. Doctors may one day work with AI to be more accurate in diagnosing patients, the researchers said.

Rodman called the prospect “exciting.” “Even if imperfect, the ease of use [of chatbots] and their ability to be integrated into clinical workflows should, in theory, lead people to make better decisions,” he said. “Future research into collaborative human and artificial intelligence is sorely needed.”

More info

Learn more about AI and medicine at Harvard University.

Source: Beth Israel Deaconess Medical Center, news release, December 11, 2023

Ernie Mundell, Health Correspondent © The New York Times 2023