Preferences and Ethical Priorities: Thinking Fast and Slow in AI

Cited: 0
Authors
Rossi, Francesca [1 ]
Loreggia, Andrea [2 ]
Affiliations
[1] IBM Res, Yorktown Hts, NY 10598 USA
[2] Univ Padua, Padua, Italy
Keywords
Multi-agent systems; knowledge representation; decision theory
DOI
None
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
In AI, the ability to model and reason with preferences allows for more personalized services. Ethical priorities are also essential if we want AI systems to make decisions that are ethically acceptable. Both data-driven and symbolic methods can be used to model preferences and ethical priorities, and to combine them in the same system, as two agents that need to cooperate. We describe two approaches to designing AI systems that can reason with both preferences and ethical priorities. We then generalize this setting to follow Kahneman's theory of thinking fast and slow in the human mind. According to this theory, we make decisions by employing and combining two very different systems: one accounts for intuition and immediate but imprecise actions, while the other models careful and complex logical reasoning. We discuss how these two systems could be exploited and adapted to design machines that allow for both data-driven and logical reasoning, and that exhibit degrees of personalized and ethically acceptable behavior.
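The abstract's two-agent picture (a preference-driven agent cooperating with an ethics-checking agent, arbitrated in the spirit of fast/slow thinking) can be sketched in a few lines. This is a toy illustration, not the authors' implementation; all function names, options, and scores are illustrative assumptions.

```python
def fast_choice(options, preference):
    """System-1-like agent: pick the option with the highest preference score."""
    return max(options, key=lambda o: preference.get(o, 0))

def slow_check(option, forbidden):
    """System-2-like agent: deliberate check against ethical priorities."""
    return option not in forbidden

def decide(options, preference, forbidden):
    """Arbitrate: take the intuitive choice unless the ethical check vetoes it."""
    intuitive = fast_choice(options, preference)
    if slow_check(intuitive, forbidden):
        return intuitive
    # Fall back to the most-preferred ethically acceptable option, if any.
    acceptable = [o for o in options if slow_check(o, forbidden)]
    return fast_choice(acceptable, preference) if acceptable else None

# Hypothetical scenario: the most-preferred action is ethically ruled out.
options = ["targeted_ad", "generic_ad", "no_ad"]
preference = {"targeted_ad": 3, "generic_ad": 2, "no_ad": 1}
forbidden = {"targeted_ad"}  # deemed ethically unacceptable in this context
print(decide(options, preference, forbidden))  # -> generic_ad
```

The design choice here mirrors the abstract: the cheap preference ranking runs first, and the slower ethical reasoning only intervenes when the intuitive answer would violate a priority.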
Pages: 3-4
Page count: 2