Effects of Explanations by Robots on Trust Repair in Human-Robot Collaborations

Cited by: 0
Authors
Bai, Zhangyunfan [1 ]
Chen, Ke [1 ]
Affiliations
[1] Zhejiang Univ, Dept Psychol & Behav Sci, Hangzhou, Peoples R China
Keywords
Ethical and Trustworthy AI; Trust Repair; Human-Robot Collaboration;
DOI
10.1007/978-3-031-60611-3_1
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Trust can be undermined when robots deviate from human expectations in human-robot collaboration (HRC), whereas proactive trust repair strategies can be employed by robots to mitigate the negative impacts of trust violations. Drawing on attribution theory, the current study investigated the effects of four explanation strategies on trust repair: internal-low integrity attribution, internal-low ability attribution, external attribution, and no repair. The study involved 149 university students in an online, between-subjects experiment that simulated a scenario in which a robot violates integrity-based trust in HRC. Participants' trust in the robot was measured across four time points, before and after the trust violation. The results showed that external attribution outperformed internal-low ability attribution, internal-low integrity attribution, and no repair in restoring trust. Explanation strategies that lead individuals to attribute the violation to low integrity had the most negative impact on trust repair.
Pages: 3-14
Number of pages: 12
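The abstract above describes a 4 (repair strategy; between-subjects) x 4 (time point; within-subjects) measurement design but does not report the analysis procedure. The sketch below is a minimal, hypothetical illustration of how such data could be structured and compared in Python: the condition names follow the abstract, while the simulated trust values, effect sizes, and the simple one-way comparison at the post-repair time point are assumptions for illustration only, not methods or results from the paper.

# Illustrative sketch only: NOT from the paper. Simulated data loosely matching
# the design described in the abstract (four repair strategies, between-subjects;
# trust measured at four time points). All effect sizes are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=1)
strategies = ["external", "internal_low_ability", "internal_low_integrity", "no_repair"]
n_per_group = 37  # roughly 149 participants split across four conditions

rows = []
for s_idx, strategy in enumerate(strategies):
    for pid in range(n_per_group):
        t1 = rng.normal(5.5, 0.8)          # pre-violation trust (7-point scale assumed)
        t2 = t1 - rng.normal(2.0, 0.5)     # trust drops after the violation
        # Hypothetical recovery: strongest for external attribution, weakest for
        # internal-low-integrity attribution and no repair.
        recovery = [1.2, 0.7, 0.2, 0.3][s_idx] + rng.normal(0, 0.4)
        t3 = t2 + recovery                 # trust after the repair attempt
        t4 = t3 + rng.normal(0.1, 0.3)     # trust at the end of the collaboration
        for time, trust in enumerate([t1, t2, t3, t4], start=1):
            rows.append({"strategy": strategy, "participant": f"{strategy}_{pid}",
                         "time": time, "trust": float(np.clip(trust, 1, 7))})

df = pd.DataFrame(rows)

# Compare post-repair trust (time point 3) across the four strategies.
groups = [df[(df["strategy"] == s) & (df["time"] == 3)]["trust"] for s in strategies]
f_stat, p_value = stats.f_oneway(*groups)

print(df.groupby(["strategy", "time"])["trust"].mean().round(2))
print(f"One-way comparison of time-3 trust: F = {f_stat:.2f}, p = {p_value:.4f}")

A full analysis of the described design would more plausibly model strategy and time together (a mixed strategy x time design); the one-way comparison is used here only to keep the sketch self-contained.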