Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM)

Cited: 4
Authors
Carter, Owen B. J. [1 ,2 ]
Loft, Shayne [1 ]
Visser, Troy A. W. [1 ]
Affiliations
[1] Univ Western Australia, Perth, Australia
[2] Univ Western Australia, Sch Psychol Sci, 35 Stirling Hwy, Perth, WA, Australia
Funding
Australian Research Council
Keywords
human-automation teaming; trust; team communication; anthropomorphism; naturalistic decision making; INCREASES TRUST; METAANALYSIS; PERFORMANCE; PSYTOOLKIT; YOUNGER; FUTURE;
DOI
10.1177/00187208231218156
CLC Classification
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology]
Discipline Codes
03; 0303; 030303; 04; 0402
Abstract
Objective: The objective was to demonstrate that anthropomorphism needs to communicate contextually useful information to increase user confidence and accurately calibrate human trust in automation.
Background: Anthropomorphism is believed to improve human-automation trust, but supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM), which predicts that improvements to trust calibration and to confidence in accepted advice arising from anthropomorphism will be weak unless anthropomorphism aids naturalistic communication of contextually useful information that facilitates prediction of automation failures.
Method: Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system, which was 50% reliable. A between-subjects 2 × 3 design compared SAM appearance (anthropomorphic avatar vs. camera eye) and voice inflection (monotone vs. meaningless vs. meaningful), with the meaningful inflections communicating contextually useful information about the certainty or uncertainty of the automated advice.
Results: The avatar SAM appearance was rated as more anthropomorphic than the camera eye, and both meaningless and meaningful inflections were rated as more anthropomorphic than monotone. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence that anthropomorphic appearance had any impact, while there was decisive evidence that meaningful inflections yielded better outcomes on these trust measures than monotone and meaningless inflections.
Conclusion: Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate their expectations of automation performance.
Application: Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.
Pages: 2485-2502
Page count: 18
Related Articles (50 total)
  • [1] The importance of incorporating risk into human-automation trust
    Stuck, Rachel E.; Tomlinson, Brianna J.; Walker, Bruce N.
    THEORETICAL ISSUES IN ERGONOMICS SCIENCE, 2022, 23(04): 500-516
  • [2] Continuous Error Timing in Automation: The Peak-End Effect on Human-Automation Trust
    Wang, Kexin; Lu, Jianan; Ruan, Shuyi; Qi, Yue
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2024, 40(08): 1832-1844
  • [3] Similarities and differences between human-human and human-automation trust: an integrative review
    Madhavan, P.; Wiegmann, D. A.
    THEORETICAL ISSUES IN ERGONOMICS SCIENCE, 2007, 8(04): 277-301
  • [4] Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment
    Calhoun, Christopher S.; Bobko, Philip; Gallimore, Jennie J.; Lyons, Joseph B.
    JOURNAL OF TRUST RESEARCH, 2019, 9(01): 28-46
  • [5] Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles
    Clare, Andrew S.; Cummings, Mary L.; Repenning, Nelson P.
    HUMAN FACTORS, 2015, 57(07): 1208-1218
  • [6] Not all trust is created equal: Dispositional and history-based trust in human-automation interactions
    Merritt, Stephanie M.; Ilgen, Daniel R.
    HUMAN FACTORS, 2008, 50(02): 194-210
  • [7] Team Communication Behaviors of the Human-Automation Teaming
    Demir, Mustafa; McNeese, Nathan J.; Cooke, Nancy J.
    2016 IEEE INTERNATIONAL MULTI-DISCIPLINARY CONFERENCE ON COGNITIVE METHODS IN SITUATION AWARENESS AND DECISION SUPPORT (COGSIMA), 2016: 28-34
  • [8] Automation transparency: implications of uncertainty communication for human-automation interaction and interfaces
    Kunze, Alexander; Summerskill, Stephen J.; Marshall, Russell; Filtness, Ashleigh J.
    ERGONOMICS, 2019, 62(03): 345-360
  • [9] Effects of trust in human-automation shared control: A human-in-the-loop driving simulation study
    Yin, Weiru; Chai, Chen; Zhou, Ziyao; Li, Chenhao; Lu, Yali; Shi, Xiupeng
    2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021: 1147-1154
  • [10] A human-automation interface model to guide automation design of system functions
    Kennedy, Joshua S.; McCauley, Michael E.
    NAVAL ENGINEERS JOURNAL, 2007, 119(01): 109-124