Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles

Cited by: 13
Authors
Clare, Andrew S. [1 ]
Cummings, Mary L. [2 ,3 ]
Repenning, Nelson P. [4 ]
Affiliations
[1] McKinsey & Company, San Francisco, CA USA
[2] Duke Univ, Duke Inst Brain Sci, Dept Mech Engn & Mat Sci, Durham, NC 27706 USA
[3] Duke Univ, Duke Elect & Comp Engn Dept, Durham, NC 27706 USA
[4] MIT, Sloan Sch Management, Management Sci & Org studies, Cambridge, MA 02139 USA
Keywords
human supervisory control; unmanned vehicles; mixed-initiative planning; priming; gaming; SPREADING ACTIVATION; SELF-CONFIDENCE; MEMORY; PERFORMANCE; MANAGEMENT; ALGORITHM; ATTENTION; DESIGN; TASKS; AIDS;
DOI
10.1177/0018720815587803
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology];
Discipline Classification Codes
03 ; 0303 ; 030303 ; 04 ; 0402 ;
Abstract
Objective: We examined the impact of priming on operator trust and system performance when supervising a decentralized network of heterogeneous unmanned vehicles (UVs). Background: Advances in autonomy have enabled a future vision of single-operator control of multiple heterogeneous UVs. Real-time scheduling for multiple UVs in uncertain environments requires the computational ability of optimization algorithms combined with the judgment and adaptability of human supervisors. Because of system and environmental uncertainty, appropriate operator trust will be instrumental in maintaining high system performance and preventing cognitive overload. Method: Three groups of operators experienced different levels of trust priming prior to conducting simulated missions in an existing, multiple-UV simulation environment. Results: Participants who play computer and video games frequently were found to have a higher propensity to overtrust automation. By priming gamers to lower their initial trust to a more appropriate level, system performance was improved by 10% as compared to gamers who were primed to have higher trust in the automation. Conclusion: Priming was successful at adjusting the operator's initial and dynamic trust in the automated scheduling algorithm, which had a substantial impact on system performance. Application: These results have important implications for personnel selection and training for futuristic multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be prone to automation bias. Priming during training and regular priming throughout missions may be one method for overcoming this propensity to overtrust automation.
Pages: 1208-1218
Number of pages: 11
Related Papers
50 records in total
  • [1] The Role of Human-Automation Consensus in Multiple Unmanned Vehicle Scheduling
    Cummings, M. L.
    Clare, Andrew
    Hart, Christin
    [J]. HUMAN FACTORS, 2010, 52 (01) : 17 - 27
  • [2] The role of human-automation consensus in multiple unmanned vehicle scheduling (erratum; vol 52, pg 17, 2010)
    Cummings, M. L.
    Clare, A.
    Hart, C.
    [J]. HUMAN FACTORS, 2010, 52 (02) : 348 - 349
  • [3] Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM)
    Carter, Owen B. J.
    Loft, Shayne
    Visser, Troy A. W.
    [J]. HUMAN FACTORS, 2024, 66 (11) : 2485 - 2502
  • [4] The importance of incorporating risk into human-automation trust
    Stuck, Rachel E.
    Tomlinson, Brianna J.
    Walker, Bruce N.
    [J]. THEORETICAL ISSUES IN ERGONOMICS SCIENCE, 2022, 23 (04) : 500 - 516
  • [5] Flexible and Complemental Human-Automation Collaborative Architecture for Mission Planning
    Jia, Fan
    Liu, Hongfu
    Chen, Jing
    [J]. 2016 8TH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC), VOL. 2, 2016, : 68 - 72
  • [6] Continuous Error Timing in Automation: The Peak-End Effect on Human-Automation Trust
    Wang, Kexin
    Lu, Jianan
    Ruan, Shuyi
    Qi, Yue
    [J]. INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2024, 40 (08) : 1832 - 1844
  • [7] Similarities and differences between human-human and human-automation trust: an integrative review
    Madhavan, P.
    Wiegmann, D. A.
    [J]. THEORETICAL ISSUES IN ERGONOMICS SCIENCE, 2007, 8 (04) : 277 - 301
  • [8] Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment
    Calhoun, Christopher S.
    Bobko, Philip
    Gallimore, Jennie J.
    Lyons, Joseph B.
    [J]. JOURNAL OF TRUST RESEARCH, 2019, 9 (01) : 28 - 46
  • [9] Effect of automation transparency in the management of multiple unmanned vehicles
    Bhaskara, Adella
    Duong, Lain
    Brooks, James
    Li, Ryan
    McInerney, Ronan
    Skinner, Michael
    Pongracic, Helen
    Loft, Shayne
    [J]. APPLIED ERGONOMICS, 2021, 90
  • [10] Not all trust is created equal: Dispositional and history-based trust in human-automation interactions
    Merritt, Stephanie M.
    Ilgen, Daniel R.
    [J]. HUMAN FACTORS, 2008, 50 (02) : 194 - 210