Are there specific international agreements about AI and nuclear weapons?
International regulation of AI, particularly its application to the military sphere and to nuclear weapons, has yet to be negotiated and agreed.

In July 2023, the United Nations Secretary-General, António Guterres, called for a legally binding treaty to ban "lethal autonomous weapons systems". One of his main worries, he told the UN Security Council, is the use of AI in connection with nuclear weapons: "I urge agreement on the general principle that human agency and control are essential for nuclear weapons and should never be withdrawn".

Processes involving several leading states are also underway, such as the AI Safety Summits that began in the UK in November 2023, which are meant to address "the safe development and use of frontier AI technology." Nuclear weapons should be central to this process.

France, the UK and the United States have all declared that they would never allow AI to control decision-making on the use of nuclear weapons.

The US and China have also begun a dialogue on the need to keep human decision-making central to nuclear weapons protocols, although no substantive agreement has yet emerged.

However, a fully international process is also needed. A treaty banning autonomous weapons systems is necessary, but in the case of nuclear weapons a treaty already exists that prohibits the weapons comprehensively. With weapons of mass destruction, trying to anticipate, mitigate or regulate the additional risks posed by emerging technologies will never be enough. We have to remove nuclear weapons from the equation entirely. The only way to eliminate all of these risks is to eliminate nuclear weapons.