How does the use of AI make it more likely nuclear weapons would be used?

Answer

Applied machine learning and autonomous systems mean faster warfare, and an even shorter window in which decision-makers must choose whether or not to launch nuclear weapons. Current estimates are that the choice to launch nuclear weapons would have to be made within minutes. Autonomous systems can also lower the threshold for engaging in armed conflict, including nuclear conflict.

There is no global agreement that human beings should remain in the decision-making loop on nuclear weapons. Debate continues over whether human evaluation of data should be removed from the decision to launch a nuclear weapon, with several governments stating that they would never remove human input. However, given recent editorials and discussion on the subject, the possibility of machines being programmed to make this existential decision remains.

The process by which machines “choose” a course of action is becoming increasingly opaque as machine learning advances, to the point that these systems are described as “black boxes” that even the humans who programme them do not fully understand. It is therefore difficult for humans to check how and why a machine recommended a course of action, and to determine whether the machine was compromised, malfunctioning or poorly programmed in a way that produced an unlawful or unintended outcome. The stakes with nuclear weapons are too high to take this risk.

Also, as satellite and other intelligence detection systems become more advanced, it will become harder to keep the locations of nuclear weapons secret, even those that have historically been concealed, such as those on submarines. This could push nuclear-armed countries to use all their nuclear weapons earlier in a conflict, knowing that an adversary would seek to neutralise all known nuclear systems as soon as possible.

Princeton University’s Program on Science and Global Security has developed a virtual reality simulation of presidential decision-making in a nuclear weapons crisis, demonstrating that the current process for deciding whether to launch nuclear weapons is already rushed and leaves no time for adequate reflection. The use of AI would reduce this time still further. Nor is simply adding a human to make the final launch decision enough: as engineering psychologist John Hawley wrote in a 2017 study, “Humans are very poor at meeting the monitoring and intervention demands imposed by supervisory control.”

The history of nuclear weapons is riddled with near misses in which nuclear war was averted only because a human chose to disregard false positives presented by machines. The importance of keeping humans in the loop to correct machines is clear in the story of the Soviet officer Stanislav Petrov, who famously ignored a warning from nuclear detection technology of incoming U.S. nuclear missiles because he was sceptical of the machine, and in doing so prevented a massive humanitarian catastrophe.

But relying on individuals under conditions of extreme stress to be as sceptical of the technology as Petrov was is no guarantee that a catastrophic error will be avoided.