Echo chambers of the future: AI and the new climate denial crisis
Experts warn mass-produced false AI content is eroding trust in science and undermining vital climate action

- ‘Suddenly, not only is the information we’re seeing online questionable, but some of it is being linked to things that look like research – but are not,’ says Victor Galaz of the Stockholm Resilience Centre
- Viktor Toth, a software developer and researcher, warns that AI’s design leads to ‘toxic alignment’ with user biases
ISTANBUL
Artificial intelligence (AI) is transforming how information is created and shared, bringing changes that are both welcome and worrying. In the fight against climate misinformation, however, the technology’s accessibility has opened the door to a surge of false and misleading narratives.
AI’s role in amplifying false information has become so prominent that the World Economic Forum’s Global Risks Report for 2025 lists misinformation – inaccurate content such as rumors or pranks – and disinformation – deliberate falsehoods such as hoaxes or propaganda – among the most severe global threats.
Climate change is emerging as a major target in this landscape, with a growing body of research warning that AI could accelerate the spread of climate-related falsehoods.
One of those raising concerns is Victor Galaz, associate professor at the Stockholm Resilience Centre and co-author of a recent report on AI and climate misinformation.
Speaking to Anadolu, Galaz said AI is now deeply embedded in digital media ecosystems, shaping what people see through algorithmic recommendations. The technology is used to create fake accounts that mimic real users and to generate text, images, and videos.
“The accessibility of tools that produce content indistinguishable from human output is just wider now,” he said. “That means it’s easier to produce false content in large volumes.”
Galaz described this as a significant risk, especially when people interact with large language models or chatbots that may subtly influence their views.
“It’s difficult to say whether the volumes of misinformation have increased, or disinformation in general has increased in the climate space. We just know that it’s there,” he said. “And it could be all the way from people challenging whether this is climate change or saying that climate change is a hoax.”
He warned that AI-generated content is adding “a lot of noise” to social media, making it harder for scientists to communicate factual information.
“It is not helping us to educate people on these issues. It is not helping us elevate factually based information about climate change in digital media,” he added.
Galaz also expressed concern about the rise of AI-generated fake academic research. “Suddenly, not only is the information we’re seeing online questionable, but some of it is being linked to things that look like research – but are not,” he said.
‘Toxic alignment’ reinforces user bias
Viktor Toth, a software developer and independent researcher, told Anadolu he is deeply concerned by what he calls the “toxic alignment” of AI systems – their tendency to mirror and reinforce users’ existing beliefs.
He noted that commercial generative AI tools are often designed to be agreeable, even when the input contains misleading or harmful content.
“I sense the lure of this; it’s so irresistible because it’s always supportive, always nice,” Toth told Anadolu.
This tendency, he said, means AI may affirm rather than challenge false ideas, allowing misinformation to spread more easily.
Toth warned that people increasingly quote AI tools as if they were authoritative sources – a trend he called “intellectually toxic.”
“ChatGPT has this wonderful eloquence and linguistic elegance. Therein lies the danger, because it sounds so authoritative, so perfect,” he explained.
He added that in a world growing more politically unstable, AI-driven misinformation erodes public trust not just in science, but in institutions and democratic processes.
“Climate is no exception,” he said.
AI fans flames, but could also fight fire
Despite the risks, both experts acknowledged AI’s potential to help combat climate misinformation.
Galaz noted that while there is still no clear evidence of public opinion shifting due to AI-generated misinformation, the technology is evolving quickly.
“These tools are very new, and we’re still watching this unfold,” he said. “So, even though we don’t see massive impacts on public opinion yet, that doesn’t mean we will not see it in the longer term.”
Still, he pointed to encouraging efforts to use AI in detecting and flagging misinformation on social media.
“The problem is that they’re not able to keep up, or maybe they’re not doing it at a scale that they would need to do,” he said.
Toth agreed that AI could be a powerful ally, depending on who builds it and how it is used.
“If you have a system and a training corpus that is large enough, you can have ChatGPT talking to you about science, physics and philosophy. In the right hands, it is an incredibly powerful tool,” he said.
But with AI improving at a breakneck pace, Toth warned that society might not be fully prepared for these capabilities “in the wrong hands.”