INTERVIEW – ‘Reasoning is unique to humans’: UN expert on AI’s expanding role in justice

As algorithms quietly enter courtrooms worldwide, UN special rapporteur Meg Satterthwaite says the right to a human judge must remain non-negotiable

Beyza Binnur Donmez | 19.11.2025 - Update: 19.11.2025

  • ‘Reasoning is unique to human beings … Artificial intelligence is not actually reasoning. It is predicting,’ says Satterthwaite
  • UN expert warns that ‘techno-capture’ by private firms could threaten judicial independence itself

GENEVA

Artificial intelligence is moving into courtrooms across the world far faster than most people realize, promising speed and efficiency while quietly reshaping how justice is delivered.

But what began as a tool to ease overloaded courts is now raising deeper questions about fairness, accountability, and the very meaning of judgment.

These questions sit at the heart of a new UN report on AI in justice systems, warning that many courts are adopting digital tools “on an ad hoc basis,” without safeguards to protect judicial independence.

Its author, Margaret Satterthwaite, the UN special rapporteur on the independence of judges and lawyers, told Anadolu that the shift underway is profound – and deeply risky if states fail to set limits.

“It’s very important that we really insist on this right to a human judge when we’re thinking about the use of artificial intelligence in judicial systems,” she said. “Reasoning is unique to human beings … Artificial intelligence is not actually reasoning. It is predicting.”

The concern, she emphasized, is not about banning technology. Courts can benefit from digital tools, but AI must never displace the human moral reasoning that sits at the core of judicial decision-making.

AI lacks the “human experience, the human sense of right and wrong, and a connection,” she said. “You need all of those things in order to reason as a person.”

Satterthwaite’s report acknowledges that, used carefully, AI can reduce barriers to justice. Around 49% of people worldwide encounter at least one legal problem every two years, yet fewer than one-third seek help, often because courts are expensive, distant, or linguistically inaccessible.

In that gap, some innovations are proving helpful. Spain’s Carpeta Justicia, or Justice Folder, system can summarize or translate legal documents into plain language, while Mexico’s Supreme Court developed Sor Juana, an AI tool that simplifies rulings for public use.

In Nigeria, the Podus AI system – accessible via WhatsApp and supporting three local languages – offers “legal first aid” and directs users to lawyers.

Satterthwaite points to translation as one area where AI is helping rather than replacing. “Translation is getting better and better almost by the day,” she said, especially for widely spoken languages.

But even this promise comes with caveats. Her report noted that AI translations remain far less reliable for so-called low-resource languages, such as Indigenous or minority tongues, and when AI misinterprets a legal term or offers wrong advice, users often have no clear way to contest the error.

More worrying are examples of AI systems deeply embedded in court processes – systems that can speed up proceedings but undermine fairness.

China’s Smart Court network has automated millions of cases, using facial recognition and algorithmic systems to review evidence and recommend sentences. While praised for its speed, the system raises serious transparency and accountability concerns.

As Satterthwaite’s report notes, “Such systems may increase political oversight, rein in judicial autonomy, and ultimately undermine independence.”

Transparency, and the lack of it

AI’s “black box” nature makes oversight harder. Some courts already rely on tools that are opaque or potentially biased.

In Poland, the Random Allocation of Judges System came under scrutiny after it allegedly assigned 56 cases to one judge while leaving others with nearly none. In the US, algorithms such as COMPAS – used to predict reoffending – have influenced bail and sentencing decisions while studies show they disproportionately affect racial minorities.

“Bias is very important in this setting,” Satterthwaite explained. “If there’s bias in the data that’s used to train a large language model ... that bias will be baked into the results.”

Yet many systems are protected by trade secrecy. ShotSpotter, a gunshot detection tool admitted as evidence in at least 200 criminal trials in the US, has faced major accuracy concerns, but its source code remains undisclosed.

“If AI is used to automate judicial decisions, the ‘black box’ nature of AI tools may render the decision-making process so opaque and incontestable that the right to a fair trial is violated,” the UN report warned.

The problem, Satterthwaite said, is that many litigants do not even know AI was used in their case.

“If we know, for example, that the judicial system is using a specific AI tool, then if some issue of bias comes up, the parties can bring that issue up,” she said. “There’s no way to do that if we don’t know how the judiciary is using those tools.”

‘Techno-capture’ and risk of outsourcing justice

Another danger is “techno-capture,” when private vendors supplying AI tools gain influence over how courts operate. In lower-income countries, some companies offer to digitize entire court systems in exchange for access to judicial data – a trade Satterthwaite says amounts to a “transfer of power.”

“One of the very important decisions judiciaries need to face right away is that they need to have the capacity to use generative AI without training those tools,” she said. Judicial data, she noted, has a completely different sensitivity than commercial data.

She also warned that judges risk losing essential skills such as drafting judgments if they rely too heavily on algorithms.

“If you allow an AI to draft your reasoned decision, are you sort of outsourcing the reasoning part?” she asked. Many judges, she added, are comfortable using AI for summaries but “would like to draw the line at it assisting them in writing,” because “writing and thinking are so closely intertwined.”

Without careful boundaries, she said, “that kind of techno-capture can start to creep in.”

Call for caution and cooperation

Satterthwaite says countries urgently need clear rules – both to protect judicial independence and to ensure AI does not entrench inequities.

“It has to be up to judges to make this decision and to draw the line,” she said. “Of course, judges need training. They need to understand what AI can and can’t do.”

She warned that global cooperation is essential because AI power is concentrated in the hands of a few major companies and states.

“We really do need a solution that involves states and governments talking to each other. The problem is fairly straightforward in the sense of AI involves concentration of power,” she said. “So, there needs to be a frank discussion about this. It’s one of the reasons that multilateral organizations like the UN are so important.”

She also stressed the environmental cost of AI, a rarely acknowledged aspect in justice debates.

“It’s really important for us to think about the climate impact here,” she said. “Maybe we need to leave some AI uses aside and continue to do certain things by hand ... to minimize environmental and climate impact.”

Ultimately, courts must learn to integrate technology without surrendering the values that define justice, Satterthwaite said, emphasizing that efficiency alone cannot justify tools that erode fairness.

In the end, she added, reasoning and judgment are “so central to what the task is,” and the core of judging cannot be automated.
