TL;DR: While governments offer assurances that AI won’t make the final decision to launch nuclear weapons, they are tight-lipped about whether AI is being embedded in the information-gathering and processing systems that advise the world leaders who make that decision. From a risk-assessment standpoint, there’s little difference between a wrong AI making the launch decision and a human making the launch decision based on wrong AI.