Preventing AI Nuclear Armageddon
Nuclear history is rife with near-misses in which disaster was averted by a human who chose to trust their own judgment rather than blindly follow the information provided by machines. Applying artificial intelligence to nuclear weapons increases the chances that, next time, nobody will stop the launch.
GENEVA – It is no longer science fiction: the race to apply artificial intelligence to nuclear-weapons systems is underway – a development that could make nuclear war more likely. With governments worldwide acting to ensure the safe development and application of AI, there is an opportunity to mitigate this danger. But if world leaders are to seize it, they must first recognize just how serious the threat is.
In recent weeks, the G7 agreed on the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, in order “to promote safe, secure, and trustworthy AI worldwide,” and US President Joe Biden issued an executive order establishing new standards for AI “safety and security.” The United Kingdom also hosted the first global AI Safety Summit, with the goal of ensuring that the technology is developed in a “safe and responsible” manner.
But none of these initiatives adequately addresses the risks posed by the application of AI to nuclear weapons. Both the G7 code of conduct and Biden’s executive order refer only in passing to the need to protect populations from AI-generated chemical, biological, and nuclear threats. And UK Prime Minister Rishi Sunak did not mention the acute threat posed by nuclear-weapons-related AI applications at all, even as he declared that a shared understanding of the risks posed by AI had been reached at the AI Safety Summit.