Open letter warns of risks, urges global halt to Artificial Superintelligence research
Future of Life Institute urges halt to Artificial Superintelligence (ASI) development, with support from leading figures in science, politics

ANKARA/ISTANBUL
The Future of Life Institute called Wednesday for a ban on research related to Artificial Superintelligence (ASI) until a scientific consensus is reached confirming that the technology is safe and controllable, according to NBC News.
A letter by the US-based nonprofit organization addressed the risks posed by AI systems that could potentially surpass human intelligence in all areas.
While it acknowledged that innovative AI tools could enhance health and well-being, it warned that ASI could one day pose a threat to humanity.
ASI refers to a hypothetical form of artificial intelligence that would surpass human cognitive abilities across all domains, including creativity, problem-solving and decision-making.
The open letter stated: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”
The declaration was signed by 865 individuals, including Nobel laureates, artists, politicians and business leaders, among them Prince Harry, Duke of Sussex, and his wife Meghan, Duchess of Sussex; Nobel laureate and AI researcher Geoffrey Hinton; Apple co-founder Steve Wozniak; former Irish President Mary Robinson; comedian Stephen Fry; and actor Joseph Gordon-Levitt.
Founded in 2014, the Future of Life Institute focuses on promoting the safe and ethical development of artificial intelligence. The group previously drew global attention in 2023 when it urged a temporary pause in training advanced AI systems such as GPT-4.