Autonomous weapons systems and the moral equality of combatants

Cited: 5
Authors
Skerker, Michael [1 ]
Purves, Duncan [2 ]
Jenkins, Ryan [3 ]
Affiliations
[1] US Naval Acad, Annapolis, MD 21402 USA
[2] Univ Florida, Gainesville, FL USA
[3] Cal Poly State Univ, San Luis Obispo, CA 93407 USA
Keywords
Lethal autonomous weapons; Moral equality of combatants; Just war theory; Military ethics; ROBOTS; RESPECT;
DOI
10.1007/s10676-020-09528-0
CLC Classification Number
B82 [Ethics (Moral Philosophy)];
Discipline Classification Code
Abstract
To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We suggest that at least one major driver of the intuitive moral aversion to lethal AWS is that their use disrespects their human targets by violating the martial contract between human combatants. On our understanding of this doctrine, service personnel cede a right not to be directly targeted with lethal violence to other human agents alone. Artificial agents, of which AWS are one example, cannot understand the value of human life. A human combatant cannot transfer his privileges of targeting enemy combatants to a robot. Therefore, the human duty-holder who deploys AWS breaches the martial contract between human combatants and disrespects the targeted combatants. We consider whether this novel deontological objection to AWS forms the foundation of several other popular yet imperfect deontological objections to AWS.
Pages: 197-209
Page count: 13