Machiavelli for robots: Strategic robot failure, deception, and trust

Cited: 0
Authors
Saetra, Henrik Skaug [1]
Affiliations
[1] Østfold University College, N-1757 Halden, Norway
Keywords
MODEL
DOI
10.1109/RO-MAN57019.2023.10309455
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Failure as a concept can refer both to a lack of objective success and to a perceived failure to live up to others' expectations, requirements, or standards. Both kinds of failure tend to be seen as undesirable, and when an entity fails in some way, this affects how the entity is evaluated by those it interacts with. But failure is not all bad. Since it is human to err, erring can also potentially foster the perception of human-like qualities in non-humans. This allows for a discussion of strategic robot failure, which entails intentionally designing robots that are perceived as failing (by the human) while they are actually successful in achieving the (hidden) objectives of their designer. Such design strategies involve the use of deception to shape, for example, humans' trust in robots, in order to engender effective human-robot interaction (HRI). This article begins with a brief description of research on failure in HRI, with an emphasis on understanding the implications of robot failure for human trust in, and reliance on, robots. I then turn to the concept of failure and distinguish between an objective component (lack of success) and a subjective component (failure as not meeting others' requirements or standards). This makes failure a relational concept that can only be fully understood through context and knowledge of the preferences, values, and expectations of the human in HRI. Building on these considerations, I conclude by discussing the potential positive and negative implications of strategic robot failure, with a closing discussion of potential ethical objections to it.
Pages: 1381 - 1388
Page count: 8
Related Papers
50 records in total
  • [41] Trust in Robots and AI
    De Pagter, Jesse
    Papagni, Guglielmo
    Crompton, Laura
    Funk, Michael
    Schwaninger, Isabel
    [J]. CULTURALLY SUSTAINABLE SOCIAL ROBOTICS, 2020, 335 : 619 - 622
  • [42] Trust and Bias in Robots
    Howard, Ayanna
    Borenstein, Jason
    [J]. AMERICAN SCIENTIST, 2019, 107 (02) : 86 - 89
  • [43] Robots, trust and war
    Simpson, T. W.
    [J]. PHILOSOPHY & TECHNOLOGY, 2011, 24 (3) : 325 - 337
  • [44] Incredible Modernism: Literature, trust and deception
    Poore, Benjamin
    [J]. TLS-THE TIMES LITERARY SUPPLEMENT, 2013, (5754) : 27 - 27
  • [45] Trust and deception in virtual societies - Introduction
    Castelfranchi, C
    Tan, YH
    [J]. TRUST AND DECEPTION IN VIRTUAL SOCIETIES, 2001 : XVII - XXXI
  • [46] Cyber Camouflage Games for Strategic Deception
    Thakoor, Omkar
    Tambe, Milind
    Vayanos, Phebe
    Xu, Haifeng
    Kiekintveld, Christopher
    Fang, Fei
    [J]. DECISION AND GAME THEORY FOR SECURITY, 2019, 11836 : 525 - 541
  • [47] The neural basis of deception in strategic interactions
    Volz, Kirsten G.
    Vogeley, Kai
    Tittgemeyer, Marc
    von Cramon, D. Yves
    Sutter, Matthias
    [J]. FRONTIERS IN BEHAVIORAL NEUROSCIENCE, 2015, 9
  • [48] Strategic Interviewing to Detect Deception: Cues to Deception across Repeated Interviews
    Masip, Jaume
    Blandon-Gitlin, Iris
    Martinez, Carmen
    Herrero, Carmen
    Ibabe, Izaskun
    [J]. FRONTIERS IN PSYCHOLOGY, 2016, 7
  • [49] Effects of Failure Types on Trust Repairs in Human-Robot Interactions
    Zhang, Xinyi
    Lee, Sun Kyong
    Maeng, Hoyoung
    Hahn, Sowon
    [J]. INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2023, 15 (9-10) : 1619 - 1635
  • [50] To trust a robot
    Rutkin, Aviva
    [J]. NEW SCIENTIST, 2015, 228 (3044) : 22 - 22