Artificial intelligence hinders development of deep, critical thinking skills: Experts
Learning new skills and complex subjects like math requires a ‘do-it-yourself’ approach, which AI undermines with instant, easily accessible responses shaped by confirmation bias, posing risks to young users, researchers say

ISTANBUL
The rapid rise and integration of artificial intelligence (AI) tools into daily life has spurred debate over how constant reliance on these systems hinders deep and critical thinking skills.
AI’s ability to deliver instant and consistent answers can eliminate the constructive confusion needed in problem solving, academics and educators argue.
Young users in particular are now encouraged to leave the thinking to AI large language models (LLMs), they say.
Critics of AI tools emphasize that these systems should be used as supplementary thinking partners rather than substitutes for thinking, arguing that reliance on LLMs can weaken long-term cognitive skills.
ChatGPT, one of the most popular AI chatbots worldwide, was found to reduce brain activity and learning motivation in people who use it over time, according to new research by the Massachusetts Institute of Technology (MIT).
The study titled “Your Brain on ChatGPT” by the MIT Media Lab divided 54 people aged 18-39 into three groups and tasked them with writing simple essays.
These groups consisted of the “LLM group,” the “Search Engine group,” and the “Brain-only group,” with participants using ChatGPT, Google searches, or only their own knowledge to write.
An electroencephalogram (EEG) test, which measures the electrical activity in the brain, was conducted on the participants. LLM or ChatGPT users were found to have the lowest levels of cognitive engagement and cognitive load.
“The Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling,” the research read.
By their third essays, ChatGPT users tended to repeat the same phrases rather than produce original ideas, and some participants directly copied AI-generated text and submitted it with only minor edits. The Brain-only group, by contrast, showed the highest brain activity and creativity, followed by the Search Engine group.
Researchers warn of cognitive erosion
The MIT study warned that regular use of generative AI tools like ChatGPT could risk cognitive erosion, for young people in particular, as these tools bypass the problem-solving, memory reinforcement, and creative thinking stages essential to learning.
The researchers of the study said the low levels of cognitive engagement could lead to slower development of critical thinking skills in the long term, thereby turning users into passive consumers.
While AI can support learning when properly integrated, users must have basic knowledge and foundational skills to reap the benefits, the study said. Clear boundaries on the use of AI in education need to be established, as AI use at early ages can create cognitive habits that are difficult to reverse.
AI eliminates productive struggle
Avijit Ghosh, associate researcher at the University of Connecticut, told Anadolu that the instant answers AI delivers can eliminate the “productive struggle” or “productive failure” that is key to building cognitive strength and developing deep thinking.
Ghosh said that resorting to AI amounts to a form of metacognitive laziness, a phenomenon rampant among young users of LLMs, which can lead to a decline in critical thinking skills.
He noted that AI can enhance the ability to ask questions when used correctly, while personalized systems reinforce people’s already established beliefs, thus reducing the opportunity to encounter different perspectives.
Ghosh stated that AI can dull users’ curiosity instead of nurturing it, and that constant use of AI can affect how people access information and whether they question it.
Access to immediate AI-generated responses harms brain’s learning
Barbara Oakley, professor of industrial and systems engineering at Oakland University, told Anadolu that posing questions to AI that demand real cognitive effort may lead to unpredictable consequences.
“Here’s where cognitive offloading reveals its most serious risks — picture a nursing student in Finland who never learned her multiplication tables,” she said.
“She’s calculating a medication dose, types ‘10 x 10’ into her calculator, but accidentally hits an extra zero — the screen shows ‘1000.’ Without missing a beat, she accepts it. Why? She has no internal alarm system telling her something’s catastrophically wrong.”
Oakley said that individuals need a basic understanding of the subject at hand before posing questions to AI, which enables the AI to act as the individual’s assistant rather than a replacement.
“In health care, engineering, finance, and countless other fields, that internalized knowledge isn’t outdated baggage — it’s our last line of defense when technology fails or when we make input mistakes,” she said.
“Picture this: A student hits a challenging math problem, feels that familiar frustration, and immediately asks ChatGPT for help. What seems like efficient learning is actually short-circuiting one of the brain’s most powerful learning mechanisms,” she said, emphasizing that people cannot distinguish “genuine quality” from “the seemingly sophisticated but worthless output of AI” when they resort to LLMs for answers.
She emphasized that using AI without foundational knowledge of the topic at hand leads to an illusion of mastery.
Oakley stated that AI can reinforce existing biases in education while reducing students’ curiosity to explore topics or complex subjects like mathematics, effectively causing the brain to switch off. She advised that AI should be designed as a thinking partner that guides students to an ideal level of difficulty, at which the most effective learning occurs.