If Our Aim Is to Build Morality Into an Artificial Agent, How Might We Begin to Go About Doing So?

Cited by: 1
Authors
Seeamber, Reneira [1 ]
Badea, Cosmin [2 ]
Affiliations
[1] Imperial Coll London, London SW7 2BX, England
[2] Imperial Coll London, Dept Comp & Philosophy, London SW7 2BX, England
Keywords
Ethics; Artificial intelligence; Decision making; Buildings; Intelligent systems; Task analysis; Reinforcement learning;
DOI
10.1109/MIS.2023.3320875
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As AI becomes pervasive in most fields, from health care to autonomous driving, it is essential that we find successful ways of building morality into our machines, especially for decision making. However, the question of what it means to be moral is still debated, particularly in the context of AI. In this article, we highlight the different aspects that should be considered when building moral agents, including the most relevant moral paradigms and challenges. We also discuss the top-down and bottom-up approaches to design and the role of emotion and sentience in morality. We then propose solutions, including a hybrid approach to design and a hierarchical approach to combining moral paradigms. We emphasize how governance and policy are becoming ever more critical in AI ethics and in ensuring that the tasks we set for moral agents are attainable, that ethical behavior is achieved, and that we obtain good AI.
Pages: 35-41 (7 pages)