Effects of Explanation Strategy and Autonomy of Explainable AI on Human-AI Collaborative Decision-making

Cited by: 0
Authors
Wang, Bingcheng [1 ]
Yuan, Tianyi [1 ]
Rau, Pei-Luen Patrick [1 ]
Affiliations
[1] Tsinghua Univ, Dept Ind Engn, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Autonomy; Decision-making; Explainability; Human-AI interaction; MENTAL WORKLOAD; SOCIAL PRESENCE; AUTOMATION; TRUST; ROBOTS; EMBODIMENT; MODEL;
DOI
10.1007/s12369-024-01132-2
Chinese Library Classification
TP24 [Robotics];
Discipline Codes
080202; 1405;
Abstract
This study examined the effects of explanation strategy (global explanation vs. deductive explanation vs. contrastive explanation) and autonomy level (high vs. low) of explainable agents on human-AI collaborative decision-making. A 3 × 2 mixed-design experiment was conducted in which the decision-making task was a modified Mahjong game. Forty-eight participants were divided into three groups, each collaborating with an agent that used a different explanation strategy; each agent had two autonomy levels. The results indicated that global explanation incurred the lowest mental workload and the highest understandability. Contrastive explanation required the highest mental workload but yielded the highest perceived competence, affect-based trust, and social presence. Deductive explanation was the worst in terms of social presence. The high-autonomy agents incurred lower mental workload and interaction fluency but higher faith and social presence than the low-autonomy agents. These findings can help practitioners design user-centered explainable decision-support agents and choose appropriate explanation strategies for different situations.
Pages: 791 - 810 (20 pages)