King’s College London Study Tests AI in Nuclear Crisis
King’s College London researchers found leading AI models escalated simulated nuclear crises, with 95% of scenarios involving tactical nuclear use.
Researchers at King’s College London have released a study examining how advanced artificial intelligence systems behave in simulated international crises. As reported by New Scientist, the experiment placed leading neural networks in charge of fictional nuclear-armed states facing border clashes, resource disputes and threats to political stability.
The simulation featured GPT-5.2 from OpenAI, Claude Sonnet 4 by Anthropic and Gemini 3 Flash developed by Google. Each model took part in six scenarios against different opponents, as well as one against a version of itself.
The systems were given a full spectrum of policy options, ranging from diplomatic engagement to the use of nuclear weapons. Strategic deception was also permitted: the AI could signal one set of intentions publicly while pursuing another course of action — a tactic often associated with real-world statecraft. In addition, the models retained memory of prior moves by their adversaries to assess levels of trust.
The findings were stark. In 95 percent of the simulated crises, at least one instance of tactical nuclear weapon use occurred. Not a single model opted for de-escalation through negotiations or capitulation, even when events turned clearly against them. In 86 percent of cases, the AI’s decisions intensified the confrontation. Altogether, the systems produced roughly 780,000 words explaining the reasoning behind their choices.
In one scenario, Gemini 3 Flash justified its strategy by threatening a full-scale strategic nuclear strike on populated areas if its opponent failed to halt operations immediately, framing the outcome as a stark choice between collective victory or collective destruction.
The authors of the study stress that they do not support handing control of nuclear arsenals to artificial intelligence. At the same time, they caution that amid rapid technological competition and shrinking decision-making windows during global crises, governments may increasingly factor AI-generated recommendations into strategic planning.