FAQ: Will AI increase the risk of nuclear war?

The risks resulting from the rapid advances in cyber operations and artificial intelligence (AI) are still being discovered, but when it comes to nuclear weapons, these technologies add another layer to an already unacceptable level of risk of nuclear weapons use. Below are answers to some frequently asked questions about these risks.

  • Could artificial intelligence lead to nuclear apocalypse?

    For AI specifically, the increased application of advanced machine learning in defense systems can speed up warfare, giving decision-makers even less time to consider whether or not to launch nuclear weapons – a window already measured in minutes, as illustrated in the video below. There is also a risk of unintended consequences from applying new artificial intelligence technologies without understanding their full implications.

    The use of AI in satellite and other intelligence detection systems will also make it more difficult to keep historically concealed nuclear weapons, such as ballistic missile submarines, hidden. At the same time, the extent to which these systems could be spoofed is unknown, which could have terrifying consequences.

    These risks are also compounded by the other risks posed by the rise of emerging technologies and cyber warfare: 

    • Cyber attacks could manipulate the information decision-makers rely on to launch nuclear weapons, and interfere with the operation of nuclear weapons themselves;
    • It is impossible to eliminate the risk of core nuclear weapons systems being hacked or compromised.


    Any combination of machine learning and nuclear weapons would mean less human control over nuclear weapons launches, as well as a dangerous reduction of the already short time for decision-making in a nuclear crisis. This means there is a greater risk they would be used, which has led to growing concern among nuclear and AI experts.

    While, as far as we know, no countries have enabled AI to make the actual decision to launch nuclear weapons, the use of AI in sensors and targeting shortens the already limited time in which a decision on whether or not to use a nuclear weapon would be taken in the event of a crisis. There have been many historical examples of near misses, and simply adding a human to make the final launch decision is not enough.


    As illustrated in Annie Jacobsen’s Nuclear War: A Scenario and Princeton’s Plan A study, the current decision-making process to launch nuclear weapons is already rushed, leaving no time for adequate reflection.

    It would be a decision taken in moments that would impact humanity for millennia. 

  • How does the use of AI make it more likely nuclear weapons would be used?

    Applied machine learning and autonomous systems mean faster warfare and an even shorter period in which decision-makers will have to choose whether to launch nuclear weapons or not. Right now, estimates are that the choice to launch nuclear weapons would be made in minutes. Autonomous systems can also lower the threshold to engage in armed conflict, including nuclear conflict.

    There is no global agreement that human beings should be in the decision-making loop on nuclear weapons. There is still a debate about whether to remove human evaluation of data from the decision to launch a nuclear weapon, with several governments stating that they would never remove human input. However, given recent editorials and debate on the subject, the possibility of machines being programmed to make this existential decision still exists.

    The process by which machines “choose” a course of action is becoming increasingly opaque as machine learning advances, to the point that these processes are called “black boxes” that even the humans who programme them don’t fully understand. This makes it difficult for humans to check how and why a machine recommended a course of action, and to determine whether the machine has been compromised, is malfunctioning, or contains flawed programming that has produced an unlawful or unintentional outcome. The stakes are too high with nuclear weapons to take this risk.

    Also, as satellite and other intelligence detection systems become more advanced, it will become more difficult to keep locations of nuclear weapons secret, even those that were historically concealed like the ones on submarines. This could then cause nuclear-armed countries to use all their nuclear weapons earlier in a conflict, given that an adversary would seek to neutralise all known nuclear systems as soon as possible.

    Princeton University’s Program on Science and Global Security has developed a virtual reality simulation of presidential decision-making in a nuclear weapons crisis to demonstrate that the current decision-making process to launch nuclear weapons is already rushed, leaving no time for adequate reflection. The use of AI would reduce this time still further. Moreover, simply adding a human to make the final launch decision is not enough; as engineering psychologist John Hawley wrote in a 2017 study, “Humans are very poor at meeting the monitoring and intervention demands imposed by supervisory control.”

    The history of nuclear weapons is riddled with near misses in which nuclear war was averted only by a human choosing to disregard false positives presented by machines. The importance of having humans in the loop to correct machines is clear in the story of the Soviet officer Stanislav Petrov, who famously ignored a warning from nuclear detection technology of incoming U.S. nuclear missiles because he was sceptical of the machine, and in so doing prevented a massive humanitarian catastrophe.

    Relying on individuals under conditions of extreme stress to be as sceptical of the technology as Petrov was is no guarantee against a catastrophic error.

  • Are there specific international agreements about AI and nuclear weapons?

    International regulation of AI, particularly its application to the military sphere and to nuclear weapons, still needs to be negotiated and agreed.

    In July 2023, the United Nations Secretary-General, António Guterres, called for a legally binding treaty to ban "lethal autonomous weapons systems". One of his main worries, he told the UN Security Council, is the use of AI in connection with nuclear weapons: "I urge agreement on the general principle that human agency and control are essential for nuclear weapons and should never be withdrawn".

    There are processes underway involving several leading states, such as the AI Safety Summits that began in the UK in November 2023, which are meant to address “the safe development and use of frontier AI technology.” Nuclear weapons should be central to this process.

    France, the UK and the United States have all declared that they would never allow AI to control decision-making on the use of nuclear weapons.

    The US and China have also started a dialogue on the need to ensure human decision-making remains central to nuclear weapons protocols, although as yet no substantive agreement has emerged.

    However, a fully international process is also needed. A treaty banning autonomous weapons systems is necessary, but in the case of nuclear weapons a treaty already exists that prohibits the weapons comprehensively. With weapons of mass destruction, trying to anticipate, mitigate or regulate the additional risks posed by emerging technologies will never be enough. The only way to eliminate all these risks is to eliminate nuclear weapons.

  • What needs to be done to prevent AI causing a nuclear catastrophe?

    There are some steps that could help to counteract the increased risk of nuclear use posed by emerging technologies, such as increasing the decision-making time leaders have to choose whether to launch nuclear weapons, taking nuclear weapons off launch-on-warning status, and delaying the processes required to put them on high alert.

    But in the case of nuclear weapons, such measures are insufficient. With nuclear weapons, trying to anticipate, mitigate or regulate the additional risks posed by emerging technologies will never be enough. Nuclear weapons need to be removed from the equation entirely. So, governments need to take action to stigmatise, prohibit and eliminate nuclear weapons by joining the Treaty on the Prohibition of Nuclear Weapons, which offers a clear path forward under international law to fair and verifiable nuclear disarmament.

    In addition, the nuclear-armed states must immediately stop modernising and expanding their nuclear arsenals. Efforts to expedite decision-making processes, or to further automate the command, control and communications required for launching nuclear weapons, increase these risks.