Demis Hassabis on AI Safety
Key Points
He stated in 2025 that he would support a 'pause' on AI development if competitors agreed, in order to give society and regulation time to catch up.
Hassabis stated that a key benchmark for true AGI would be the system's ability to invent novel scientific hypotheses, not just prove existing ones.
The CEO has stressed the need for rigorous testing for dangerous capabilities, such as deception, before deploying frontier AI models.
Summary
Demis Hassabis holds a strong position that rigorous safety measures are crucial for advanced AI development, particularly concerning AGI. He is actively concerned about risks such as misuse by bad actors and autonomous systems going out of control, and he advocates for robust guardrails and testing.
Frequently Asked Questions
Would Demis Hassabis support a pause on AI development?
Yes, Demis Hassabis has indicated he would support a pause on AI development, but only on the condition that major competitors also agree to stop. His primary goal for a pause would be to give society and regulatory frameworks time to mature alongside the technology.
What AI risks has Demis Hassabis identified as most serious?
The main risks identified by the Google DeepMind CEO fall into two areas: dual-use concerns, where beneficial AI is weaponized by malicious actors, and the control problem, where highly capable autonomous systems might act in ways misaligned with human values. He stresses the need to solve alignment scientifically.
Sources (6)
Google DeepMind's Demis Hassabis and the paradox of AI progress
Demis Hassabis' TIME100 on AlphaFold, AGI, and humanity. | TIME
Google DeepMind CEO Demis Hassabis on what's still needed for AGI — EA Forum
Letter to Sir Demis Hassabis - Pause AI
Google DeepMind CEO warns of peril should we lose control of AI systems
Unreasonably Effective AI with… – Google DeepMind: The Podcast – Apple Podcasts
* This is not an exhaustive list of sources.