Study warns AI chatbots can pose risks when used for medical advice
Researchers say large language models may give inaccurate diagnoses and miss urgent cases
ANKARA
Using artificial intelligence (AI) chatbots to seek medical advice can be “dangerous,” according to a new study published in the journal Nature Medicine, media reports said on Tuesday.
The research, led by the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford, found that relying on AI to make medical decisions presents risks to patients due to a “tendency to provide inaccurate and inconsistent information.”
Rebecca Payne, a co-author of the study and a general practitioner, said: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognize when urgent help is needed,” she added.
In the study, nearly 1,300 participants were asked to identify possible health conditions and recommend next steps based on different scenarios.
Some participants used large language model software to obtain a potential diagnosis, while others relied on traditional methods such as consulting a GP.
Researchers found that AI tools often delivered a "mix of good and bad information" that users struggled to tell apart.
While the chatbots “excel at standardized tests of medical knowledge,” the study concluded that their real-world use as medical tools “would pose risks to real users seeking help with their own medical symptoms.”
Lead author Andrew Bean said interacting with humans remains “a challenge” for even top-performing AI systems and expressed hope the findings would contribute to safer development of such tools.
