Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM)

Cited by: 4
Authors
Carter, Owen B. J. [1 ,2 ]
Loft, Shayne [1 ]
Visser, Troy A. W. [1 ]
Institutions
[1] Univ Western Australia, Perth, Australia
[2] Univ Western Australia, Sch Psychol Sci, 35 Stirling Hwy, Perth, WA, Australia
Funding
Australian Research Council
Keywords
human-automation teaming; trust; team communication; anthropomorphism; naturalistic decision making; INCREASES TRUST; METAANALYSIS; PERFORMANCE; PSYTOOLKIT; YOUNGER; FUTURE;
DOI
10.1177/00187208231218156
Chinese Library Classification
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology]
Discipline Classification Codes
03; 0303; 030303; 04; 0402
Abstract
Objective: To demonstrate that anthropomorphism needs to communicate contextually useful information in order to increase user confidence and accurately calibrate human trust in automation.
Background: Anthropomorphism is believed to improve human-automation trust, but supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM), which predicts that improvements to trust calibration and to confidence in accepted advice arising from anthropomorphism will be weak unless anthropomorphism aids naturalistic communication of contextually useful information that facilitates prediction of automation failures.
Method: Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system, which was 50% reliable. A between-subjects 2 x 3 design compared SAM appearance (anthropomorphic avatar vs. camera eye) and voice inflection (monotone vs. meaningless vs. meaningful), with the meaningful inflections communicating contextually useful information about the certainty or uncertainty of automated advice.
Results: The avatar appearance was rated as more anthropomorphic than the camera eye, and both meaningless and meaningful inflections were rated as more anthropomorphic than monotone. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence that anthropomorphic appearance had any impact, while there was decisive evidence that meaningful inflections yielded better outcomes on these trust measures than monotone and meaningless inflections.
Conclusion: Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate their expectations of automation performance.
Application: Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.
Pages: 2485-2502 (18 pages)