AI in the courtroom: Justice revolution or risk to rights?
As AI enters the courtroom, experts caution that while it speeds analysis, it may ‘cherry-pick’ evidence

- 'We must ensure that AI strengthens justice rather than undermining fundamental rights,' says professor at John Jay College of Criminal Justice
ISTANBUL
The courtroom, long defined by human judgment and professional expertise, is now facing a technological test: can artificial intelligence (AI) step into the role of an expert witness?
As AI tools reshape how evidence is processed and presented, legal and medical experts remain divided on whether technology can ever replace human testimony.
Speaking to Anadolu during the 20th International Forensic Medicine Days, held in the Turkish resort city of Antalya and attended by nearly 800 experts from 27 countries, Prof. Dr. Mohammed Ranayava of Marshall University, a physician and attorney, said the change AI brings to many sectors is nothing short of “revolutionary.”
He recalled how forensic experts once had to comb through endless medical records by hand.
“Prior to the advent of AI, we (forensic experts) had to actually physically look through records,” he said. “Today, AI can analyze hundreds of pages of PDF documents and summarize them in minutes, a task that previously took hours ... AI is not the future, it's here, and it has already changed multiple things that an independent medical examiner does.”
For decades, expert witnesses in medicine relied on painstaking reviews of patient records, accident reports and exposure histories. Ranayava said AI has transformed this work almost overnight.
“A technology that I used six months ago is now ancient technology...AI has really changed the environment for an expert witness, and it continues to be growing by leaps and bounds.”
Yet efficiency brings new risks, particularly around privacy and reliability.
Ranayava warned that laws like the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule in the US and privacy acts in Europe place strict responsibility on professionals to keep sensitive medical data secure and confidential.
“The other challenge is that AI can try to please you sometimes and hallucinate some things that don't exist. Particularly, this is a problem with generative AI,” he said, stressing that human oversight remains essential.
Prof. Ali Kocak of the John Jay College of Criminal Justice in New York, meanwhile, views AI as a powerful tool rather than a witness.
“Courts and law enforcement use AI for tasks such as facial recognition, DNA mixture analysis, gunshot detection systems like ShotSpotter, and even predictive policing platforms,” he told Anadolu.
Such outputs are generally treated as “non-testimonial data,” meaning they can be admitted in court without triggering the defendant’s right to confrontation.
Still, Kocak stressed that AI cannot yet serve as an expert witness, noting that an “expert witness is expected to explain their reasoning, methodology, and conclusions transparently so that the court can assess credibility. AI systems, especially those using machine learning, often operate as ‘black boxes.’”
They may deliver accurate results, he added, but cannot explain their reasoning to meet legal standards of reliability and reproducibility.
Kocak explained that recent court cases show how AI evidence is being treated differently. In Commonwealth v. Weeden (Pennsylvania, 2023), prosecutors used ShotSpotter, an AI system that detects gunfire, as key evidence. The court allowed it as machine-generated data, so it did not require cross-examination.
Likewise, in State v. Lester (North Carolina, 2025), call records and a spreadsheet from PenLink software were treated as routine business records, meaning they could be admitted without a live witness.
Unlike human experts, AI risks ‘cherry-picking’ evidence
Looking to the future, Kocak argued that AI could expand the scope of forensic analysis while forcing the legal system to redefine accountability.
“If juries and judges begin to rely heavily on AI outputs, we may see a shift where human experts serve more as interpreters or validators of AI findings, rather than the primary source of expertise,” he said.
Ranayava agreed that courts are beginning to recognize AI’s role but stressed credibility above all.
“In my line of work, you cannot be 95% right and 5% wrong. You have to be absolutely correct in your analysis, in your data collection,” he said.
He added that credibility requires judges to see both sides of the argument.
“We are, in the courtroom, not an advocate. We are the advocate for the truth,” he said, warning that expert witnesses cannot “cherry-pick,” a risk AI may pose.
“The challenge with AI is sometimes it tries to please you,” Ranayava noted.
If your prompt is not precise, he said, it may give the answer you want instead of considering other aspects.
Kocak emphasized that for AI to play a larger role, both transparency and legal reform are essential.
Technologically, “AI systems must become more transparent and explainable so their reasoning can be scrutinized in court,” he said.
He said that legally, “reforms are needed to allow defendants to question and test the reliability of AI systems -- much like they would cross-examine a human expert.”
Otherwise, he warned, “we risk blind trust in machines and trials where defendants cannot meaningfully challenge the evidence against them.”
“Ethically, we must ensure that AI strengthens justice rather than undermining fundamental rights like the ability to confront your accuser.”