
Industry leaders warn artificial intelligence poses 'extinction' threat

'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks,' say experts

Michael Hernandez | 30.05.2023 - Update: 31.05.2023

WASHINGTON

A broad spectrum of researchers and industry leaders warned Tuesday that advances in artificial intelligence (AI) pose a "risk of extinction" for humanity on a par with nuclear war.

The nonprofit Center for AI Safety issued a 22-word open letter co-signed by hundreds of experts, including the CEOs of three AI industry leaders -- Google DeepMind, OpenAI and Anthropic -- two of the three Turing Award winners considered the "godfathers" of AI, and the authors of the standard textbooks on AI, deep learning and reinforcement learning.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," they warned.

The urgent appeal comes amid rapid advances in AI that have further increased fears of the technology's potential risks.

In April, a bipartisan group of US lawmakers introduced legislation to bar AI from making launch decisions within the US nuclear command and control process.

The Block Nuclear Launch by Autonomous Artificial Intelligence Act would codify existing Pentagon policy mandating that a human initiate any nuclear launch, and would bar federal funds from being used to carry out a launch by automated systems. Nuclear launches would require "meaningful human control" under the legislation.

Seemingly less nefarious uses of artificial intelligence, such as OpenAI's chatbot ChatGPT, have led others to question the economic fallout that could ripple across a broad set of industries if AI is relied on for labor, upending societies globally.

Chatbots have also given rise to concerns that computer programs could deceive humans online and be used to spread propaganda and misinformation worldwide.

In a blog post last week, OpenAI CEO Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever warned of the need for increased regulation of AI development, with the technology poised to "exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations" in the next decade.

They proposed the creation of an AI-focused international body akin to the International Atomic Energy Agency to regulate "superintelligence efforts."

"Any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.," they wrote.

"As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it," they added.
