Machiavelli for robots: Strategic robot failure, deception, and trust

Cited by: 0
Author
Saetra, Henrik Skaug [1]
Affiliation
[1] Ostfold Univ Coll, N-1757 Halden, Norway
Keywords
MODEL;
DOI
10.1109/RO-MAN57019.2023.10309455
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Failure as a concept can refer both to a lack of objective success and to a perceived failure to live up to others' expectations, requirements, or standards. Both kinds of failure tend to be seen as undesirable, and when an entity fails in some way, this affects how the entity is evaluated by those it interacts with. But failure is not all bad. Since it is human to err, erring can also foster the perception of human-like qualities in non-humans. This opens a discussion of strategic robot failure: intentionally designing robots that are perceived by the human as failing while actually succeeding at the (hidden) objectives of their designer. Such design strategies use deception to shape, for example, humans' trust in robots in order to engender effective human-robot interaction (HRI). This article begins with a brief overview of research on failure in HRI, with an emphasis on the implications of robot failure for human trust in, and reliance on, robots. I then turn to the concept of failure and distinguish between an objective component (lack of success) and a subjective component (failure as not meeting others' requirements or standards). This makes failure a relational concept that can only be fully understood through context and knowledge of the preferences, values, and expectations of the human in HRI. Building on these considerations, I conclude by discussing the potential positive and negative implications of strategic robot failure, with a closing discussion of potential ethical objections to it.
Pages: 1381-1388
Page count: 8