Can Lethal Autonomous Robots Learn Ethics?

Cited by: 0
Author
Narayanan, Ajit [1 ]
Affiliation
[1] Auckland Univ Technol, Comp Sci, Auckland, New Zealand
Keywords
Machine ethics; Lethal autonomous robots; Fuzzy logic;
DOI
10.1007/978-3-030-64984-5_18
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
When lethal autonomous robots (LARs) are used in warfare, ensuring that they behave ethically given military necessity, rules of engagement, and the laws of war raises important questions. This paper describes a novel approach in which an LAR acquires its own knowledge of ethics by generating data for a wide variety of simulated battlefield situations. The LAR uses unsupervised learning techniques to find naturally occurring clusters that equate approximately to ethically justified and ethically unjustified lethal engagement. These cluster labels can then be used to learn moral rules for determining whether its autonomous actions are ethical in specific battlefield contexts. One major advantage of this approach is that it reduces the probability of the LAR picking up human biases and prejudices. Another advantage is that an LAR learning its own ethical code is more consistent with the idea of an intelligent autonomous agent.
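The two-stage pipeline the abstract describes — cluster simulated situations without labels, then distill the cluster assignments into explicit rules — can be sketched as follows. This is a minimal illustration only: the feature names are invented, crisp k-means stands in for whatever clustering the paper actually uses (its keywords suggest fuzzy logic plays a role), and a shallow decision tree stands in for the learned moral rules.

```python
# Illustrative sketch of the paper's approach (assumed details, not the
# authors' implementation): cluster simulated battlefield situations,
# then learn transparent rules from the cluster labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stage 0: simulate battlefield situations as feature vectors, e.g.
# [combatant_likelihood, civilian_proximity, threat_level] in [0, 1].
# (Feature names are hypothetical.)
situations = rng.random((500, 3))

# Stage 1: unsupervised learning finds two natural clusters, taken to
# approximate ethically justified vs. unjustified lethal engagement.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(situations)

# Stage 2: a shallow, interpretable classifier distills the cluster
# labels into explicit rules the agent can apply in new contexts.
rules = DecisionTreeClassifier(max_depth=3, random_state=0).fit(situations, clusters)

# The learned rules can now judge an unseen situation.
new_situation = [[0.9, 0.1, 0.8]]
decision = rules.predict(new_situation)[0]
```

A decision tree is chosen here only because its branches read as if-then rules; any rule learner trained on the cluster labels would fill the same role in the sketch.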
Pages: 230-240 (11 pages)