AI in healthcare may offer great potential, but data bias risks remain
Medical applications of AI could amplify diagnostic errors if employed without strict standards and quality control, says expert
ANKARA
As artificial intelligence (AI) tools develop rapidly and spread across sectors such as healthcare, debates over their data biases, privacy implications, and legal liabilities are deepening.
Agah Tugrul Korucu, an associate professor of computer education and an AI expert, told Anadolu that AI may offer great potential in the medical field, but it can be safely used only with a robust ethical and governance framework.
Korucu stated that data bias in healthcare AI is a practical problem affecting patient safety, since AI models learn from the data they are trained on. Systematic errors can emerge if specific age groups, socioeconomic classes, or geographic regions are underrepresented in those training data sets.
He pointed to Türkiye’s national health database, e-Nabiz, as a significant strategic advantage, but noted that raw data does not generate value on its own and that this potential could become a risk if the data is not processed correctly.
“When this system is used without data standards, quality control, or an ethical and legal framework, errors can grow as the scale increases,” he said.
The primary hurdles in medical AI today are data quality, selection bias, labeling inconsistencies, and serious privacy risks, he said, while issues such as differing data recording conventions between hospitals can mislead AI models. Creating standardized data terminology and institution-based quality metrics is essential, he added.
Korucu stated that unauthorized access to data such as health records can have severe legal consequences, which is why such records require strict anonymization and secure analysis environments.
Healthcare AI models add the most value by acting as a “second eye” in fields like radiology and pathology, Korucu said, as these systems reduce workloads by quickly flagging suspicious areas or filtering out normal scans, shortening diagnosis times.
“The role of AI has to be explained with transparency to the patient, and the physician has to remain as the decision maker due to the automation bias risk,” he said.
On legal and ethical liability for erroneous referrals, Korucu said the academic literature supports a layered responsibility model: developers handle verification, healthcare institutions manage integration, and clinicians justify the final decision. The goal, he said, is to establish mechanisms that prevent mistakes from the outset.
As for the coming decade, Korucu noted that the combination of genomic data and AI could create significant transformations in personalized medicine.
He mentioned that this could enable precise pharmacogenomic approaches, which would allow doctors to prescribe the right medication as early as possible, while dramatically shortening the diagnosis time for rare diseases.
Korucu noted that the priority areas in medical AI in Türkiye should focus on radiology triage systems, intensive care early warning mechanisms, and chronic disease management, urging developers to test every new model rigorously in real-world clinical environments across diverse patient groups.
He dismissed worries about AI replacing medical professionals, saying: “The final clinical decision, responsibility, and trust relationship with the patient must remain with the human physician.”
“The future model is the ‘AI-assisted physician,’ where the technology accelerates decisions but the physician remains the decision maker,” he added.
*Writing by Emir Yildirim in Istanbul
