Adaptive trust calibration for human-AI collaboration

Cited by: 67
Authors
Okamura, Kazuo [1 ]
Yamada, Seiji [1 ,2 ]
Affiliations
[1] Grad Univ Adv Studies SOKENDAI, Sch Multidisciplinary Sci, Dept Informat, Tokyo, Japan
[2] Natl Inst Informat, Digital Content & Media Sci Res Div, Tokyo, Japan
Source
PLOS ONE | 2020, Vol. 15, No. 02
Keywords
AUTOMATION; TRANSPARENCY; CONFIDENCE;
DOI
10.1371/journal.pone.0229132
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
The safety and efficiency of human-AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency for maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill this gap, we propose a method of adaptive trust calibration that consists of a framework for detecting inappropriate calibration status by monitoring the user's reliance behavior, together with cognitive cues, called "trust calibration cues", that prompt the user to reinitiate trust calibration. We evaluated our framework and four types of trust calibration cues in an online experiment using a drone simulator. A total of 116 participants performed pothole inspection tasks using the drone's automatic inspection, whose reliability could fluctuate depending on the weather conditions. The participants had to decide whether to rely on automatic inspection or to inspect manually. The results showed that adaptively presenting simple cues significantly promoted trust calibration during over-trust.
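The detection mechanism described in the abstract, comparing the user's reliance choice against the automation's current reliability and prompting re-calibration with a simple cue, can be sketched as follows. This is a minimal illustration only, not the authors' implementation; the function names and the reliability threshold are hypothetical.

```python
from typing import Optional

def detect_miscalibration(relied_on_ai: bool, ai_reliability: float,
                          threshold: float = 0.5) -> Optional[str]:
    """Classify the user's trust state from one reliance decision.

    All names and the 0.5 threshold are illustrative assumptions,
    not taken from the paper.
    """
    if relied_on_ai and ai_reliability < threshold:
        return "over-trust"   # relying although automation is unreliable
    if not relied_on_ai and ai_reliability >= threshold:
        return "under-trust"  # inspecting manually although automation is reliable
    return None               # trust appears calibrated

def maybe_present_cue(relied_on_ai: bool, ai_reliability: float) -> Optional[str]:
    """Return a trust calibration cue when miscalibration is detected."""
    status = detect_miscalibration(relied_on_ai, ai_reliability)
    if status == "over-trust":
        return "Cue: automation reliability has dropped; consider inspecting manually."
    if status == "under-trust":
        return "Cue: automation is currently reliable; manual inspection may be unnecessary."
    return None
```

In the paper's drone scenario, `ai_reliability` would fluctuate with the weather, and a cue would be shown only when the monitored reliance behavior indicates over- or under-trust.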
Pages: 20
Related Papers
50 records in total
  • [1] Empirical Evaluations of Framework for Adaptive Trust Calibration in Human-AI Cooperation
    Okamura, Kazuo
    Yamada, Seiji
    [J]. IEEE ACCESS, 2020, 8 : 220335 - 220351
  • [2] A Quantum Model of Trust Calibration in Human-AI Interactions
    Roeder, Luisa
    Hoyte, Pamela
    van der Meer, Johan
    Fell, Lauren
    Johnston, Patrick
    Kerr, Graham
    Bruza, Peter
    [J]. ENTROPY, 2023, 25 (09)
  • [3] Take It, Leave It, or Fix It: Measuring Productivity and Trust in Human-AI Collaboration
    Qian, Crystal
    Wexler, James
    [J]. PROCEEDINGS OF 2024 29TH ANNUAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2024, 2024, : 370 - 384
  • [4] Exploring Trust in Human-AI Collaboration in the Context of Multiplayer Online Games
    Hou, Keke
    Hou, Tingting
    Cai, Lili
    [J]. SYSTEMS, 2023, 11 (05)
  • [5] Human-AI Collaboration in Recruitment and Selection
    Natarajan, Neil
    [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 7089 - 7090
  • [6] Human-AI Collaboration with Bandit Feedback
    Gao, Ruijiang
    Saar-Tsechansky, Maytal
    De-Arteaga, Maria
    Han, Ligong
    Lee, Min Kyung
    Lease, Matthew
    [J]. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 1722 - 1728
  • [7] Diverse Conventions for Human-AI Collaboration
    Sarkar, Bidipta
    Shih, Andy
    Sadigh, Dorsa
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [8] AI in Education, Learner Control, and Human-AI Collaboration
    Brusilovsky, Peter
    [J]. INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION, 2024, 34 (01) : 122 - 135
  • [9] Specifying AI Objectives as a Human-AI Collaboration Problem
    Dragan, Anca
    [J]. AIES '19: PROCEEDINGS OF THE 2019 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, 2019, : 329 - 329