X's policies sparked anti-Muslim, anti-migrant narratives after Southport attack: Report

X's content-ranking algorithms 'systematically prioritize' content that sparks outrage, engagement 'without adequate safeguards,' according to research

Burak Bir  | 06.08.2025 - Update : 06.08.2025
LONDON 

X's design and policy choices created fertile ground for inflammatory and racist narratives targeting Muslims and migrants following last year's deadly Southport attack in the UK, a new analysis showed on Wednesday.

According to research published by Amnesty International, the social media platform X played a "central role" in the spread of false narratives and harmful content that contributed to racist violence against Muslim and migrant communities in Britain following the murder of three young girls in the town of Southport last summer.

The technical analysis of X's open-source code, the publicly available software behind its recommender system, also known as its content-ranking algorithms, showed that the platform "systematically prioritizes" content that sparks outrage, provokes heated exchanges, reactions, and engagement, without adequate safeguards to prevent or mitigate harm.

"Our analysis shows that X’s algorithmic design and policy choices contributed to heightened risks amid a wave of anti-Muslim and anti-migrant violence observed in several locations across the UK last year, and which continues to present a serious human rights risk today," said Pat de Brun, head of Big Tech Accountability at Amnesty International.

Far-right riots broke out across the UK following the stabbing attack by Axel Rudakubana in Southport on July 29 last year.

The violence was fueled by false online claims that the suspect, a British citizen born in Cardiff, Wales, was a Muslim asylum seeker.

Amid false claims circulating on social media platforms, many mosques, Islamic buildings, and hotels housing migrants were targeted across the country.  

Algorithm appears to have no mechanism for assessing potential for causing harm

According to the research, as long as a post drives engagement, the algorithm appears to have no mechanism for assessing the potential for causing harm, "at least not until enough users themselves report it."

"These design features provided fertile ground for inflammatory racist narratives to thrive on X in the wake of the Southport attack," it added.

The study also noted that an account on X called "Europe Invasion," known to publish anti-immigrant and Islamophobic content, posted shortly after news of the attack emerged that the suspect was "alleged to be a Muslim immigrant."

It noted that the post garnered over four million views, and that within 24 hours, X posts speculating that the perpetrator was Muslim, a refugee, a foreign national, or had arrived by boat had drawn an estimated 27 million impressions.

Noting that the Southport tragedy occurred in the context of "major policy and personnel changes" at X, the study pointed out that since Elon Musk's takeover in late 2022, X has laid off content moderation staff, reinstated previously banned accounts, disbanded Twitter's Trust and Safety Advisory Council, and fired trust and safety engineers.

Numerous accounts that had been previously banned for hate or harassment, including that of Stephen Yaxley-Lennon, a far-right figure better known as Tommy Robinson, were also restored.
