Should AI stay or should AI go? First strike incentives & deterrence stability

Cited by: 0
Authors
Zala, Benjamin [1 ]
Affiliations
[1] Australian Natl Univ, Dept Int Relat, Canberra, Australia
Keywords
Nuclear decision making; artificial intelligence (AI); strategic stability; deterrence; third nuclear age; arms control
DOI
10.1080/10357718.2024.2328805
CLC Number: D81 [International Relations]
Discipline Code: 030207
Abstract
How should states balance the benefits and risks of employing artificial intelligence (AI) and machine learning in nuclear command and control systems? I will argue that it is only by placing developments in AI against the larger backdrop of the increasing prominence of a much wider set of strategic non-nuclear capabilities that this question can be adequately addressed. In order to do so, I will make the case for disaggregating the different risks that AI poses to stability as well as examine the specific ways in which it may instead be harnessed to restabilise nuclear-armed relationships. I will also identify a number of policy areas that ought to be prioritised by way of mitigating the risks and harnessing the opportunities identified in the article, including discussing the possibilities of both formal and informal arms control arrangements.
Pages: 154-163
Number of pages: 10