Autonomous Weapon Systems - An Alleged Responsibility Gap

Cited by: 1
Author
Swoboda, Torben [1 ]
Institution
[1] Univ Bayreuth, Univ Str 30, D-95447 Bayreuth, Germany
Keywords
DOI
10.1007/978-3-319-96448-5_32
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In an influential paper, Sparrow argues that it is immoral to deploy autonomous weapon systems (AWS) in combat. The general idea is that nobody can be held responsible for wrongful actions committed by an AWS, because nobody can predict or control the AWS. I argue that this view is incorrect. The programmer remains in control of when and how an AWS learns from experience. Furthermore, the programmer can predict the non-local behaviour of the AWS. This is sufficient to ensure that the programmer can be held responsible. I then present a consequentialist argument in favour of using AWS: when an AWS misclassifies non-legitimate targets as legitimate less often than human soldiers do, using the AWS can be expected to save lives. However, there are also a number of reasons, e.g. the risk of hacking, why we should still be cautious about introducing AWS to modern warfare.
Pages: 302-313
Page count: 12