Press release from the Center for AI Safety
San Francisco, CA – Distinguished AI scientists, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, and leaders of the major AI labs, including Sam Altman of OpenAI and Demis Hassabis of Google DeepMind, have signed a single-sentence statement from the Center for AI Safety that reads:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This represents a historic coalition of AI experts, joined by philosophers, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists, establishing the risk of extinction from advanced, future AI systems as one of the world’s most important problems. The statement also affirms growing public sentiment: a recent poll found that 61 percent of Americans believe AI threatens humanity’s future.
The increasing concern about the potential impacts of AI is reminiscent of early discussions about atomic energy. “We knew the world would not be the same,” J. Robert Oppenheimer once recalled; he later called for international coordination to avoid nuclear war. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” said Dan Hendrycks, Director of the Center for AI Safety.
It is crucial both to address the negative impacts of AI that are already being felt across the world and to have the foresight to anticipate the risks posed by more advanced AI systems. “Pandemics were not on the public’s radar before COVID-19. It’s not too early to put guardrails in place and set up institutions so that AI risks don’t catch us off guard,” Hendrycks said. “As we grapple with immediate AI risks like malicious use, misinformation, and disempowerment, the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence. Mitigating the risk of extinction from AI will require global action. The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems.”
Signatories were verified through email confirmation or personal contact. The organization that hosts the statement, the Center for AI Safety, is a non-profit with the mission of reducing societal-scale risk from AI through research, field-building, and advocacy. You can learn more about AI safety and the work CAIS is doing at safe.ai and stay on top of the latest in AI safety news by subscribing to the weekly AI Safety newsletter.
Notable signatories of the statement include:
- CEOs of top AI labs: Sam Altman, Demis Hassabis, and Dario Amodei
- The authors of the standard textbook on Artificial Intelligence (Stuart Russell and Peter Norvig)
- Two authors of the standard textbook on Deep Learning (Ian Goodfellow and Yoshua Bengio)
- An author of the standard textbook on Reinforcement Learning (Andrew Barto)
- Three Turing Award winners (Geoffrey Hinton, Yoshua Bengio, and Martin Hellman)
- Executives from Microsoft, OpenAI, Google, Google DeepMind, and Anthropic (leaders from Meta have not signed)
- The scientists behind famous AI systems such as AlphaGo and every version of GPT (David Silver, Ilya Sutskever)
- The two most cited computer scientists (Hinton and Bengio), and the most cited scholar in computer security and privacy (Dawn Song)
- AI professors from Chinese universities
- Professors who study pandemics, climate change, and nuclear technology
- Other signatories include Marian Rogers Croak (inventor of VoIP, Voice over Internet Protocol) and Kersti Kaljulaid (former President of the Republic of Estonia)