Evaluating the Impact of Human Explanation Strategies on Human-AI Visual Decision-Making

Cited by: 5
Authors
Morrison, Katelyn [1]
Shin, Donghoon [2 ]
Holstein, Kenneth [1 ]
Perer, Adam [1 ]
Affiliations
[1] Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, United States
[2] University of Washington, 3960 Benton Lane NE, Seattle, WA 98195, United States
Keywords
Artificial intelligence techniques - Causal explanations - Decision-making process - Disaster relief - Explanation generation - Human-artificial intelligence collaboration - Human-centered explainable AI
DOI
10.1145/3579481
Abstract
Artificial intelligence (AI) is increasingly being deployed in high-stakes domains, such as disaster relief and radiology, to aid practitioners during the decision-making process. Explainable AI techniques have been developed and deployed to provide users with insights into why the AI made certain predictions. However, recent research suggests that these techniques may confuse or mislead users. We conducted two studies to uncover strategies that humans use to explain decisions and then understand how those explanation strategies impact visual decision-making. In our first study, we elicited explanations from humans assessing and localizing damaged buildings in satellite imagery after natural disasters, and identified four core explanation strategies that humans employed. We then followed up by studying the impact of these explanation strategies, framing the explanations from Study 1 as if they were generated by AI and showing them to a different set of decision-makers performing the same task. We provide initial insights on how causal explanation strategies improve humans' accuracy and calibrate humans' reliance on AI when the AI is incorrect. However, we also find that causal explanation strategies may lead to incorrect rationalizations when the AI presents a correct assessment with an incorrect localization. We explore the implications of our findings for the design of human-centered explainable AI and outline directions for future work. © 2023 Owner/Author.
Related Papers (50 total)
  • [1] The Impact of Imperfect XAI on Human-AI Decision-Making
    Morrison, Katelyn
    Spitzer, Philipp
    Turri, Violet
    Feng, Michelle
    Kühl, Niklas
    Perer, Adam
    [J]. Proceedings of the ACM on Human-Computer Interaction, 2024, 8 (CSCW1)
  • [2] Effects of Explanation Strategy and Autonomy of Explainable AI on Human-AI Collaborative Decision-making
    Wang, Bingcheng
    Yuan, Tianyi
    Rau, Pei-Luen Patrick
    [J]. INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2024, 16 (04) : 791 - 810
  • [3] Decision Making Strategies and Team Efficacy in Human-AI Teams
    Munyaka, Imani
    Ashktorab, Zahra
    Dugan, Casey
    Johnson, J.
    Pan, Qian
    [J]. Proceedings of the ACM on Human-Computer Interaction, 2023, 7 (CSCW1)
  • [4] Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
    Schoeffer, Jakob
    De-Arteaga, Maria
    Kuehl, Niklas
    [J]. PROCEEDINGS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2024), 2024,
  • [5] Effective human-AI work design for collaborative decision-making
    Jain, Ruchika
    Garg, Naval
    Khera, Shikha N.
    [J]. KYBERNETES, 2023, 52 (11) : 5017 - 5040
  • [6] Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
    Chen, Valerie
    Liao, Q. Vera
    Wortman Vaughan, Jennifer
    Bansal, Gagan
    [J]. Proceedings of the ACM on Human-Computer Interaction, 2023, 7 (CSCW2)
  • [7] The Impact of Explanations on Fairness in Human-AI Decision-Making: Protected vs Proxy Features
    Goyal, Navita
    Baumler, Connor
    Tin Nguyen
    Daume, Hal, III
    [J]. PROCEEDINGS OF 2024 29TH ANNUAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2024, 2024, : 155 - 180
  • [8] "DecisionTime": A Configurable Framework for Reproducible Human-AI Decision-Making Studies
    Salimzadeh, Sara
    Gadiraju, Ujwal
    [J]. ADJUNCT PROCEEDINGS OF THE 32ND ACM CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION, UMAP 2024, 2024, : 66 - 69
  • [9] An Empirical Evaluation of Predicted Outcomes as Explanations in Human-AI Decision-Making
    Jakubik, Johannes
    Schoeffer, Jakob
    Hoge, Vincent
    Voessing, Michael
    Kuehl, Niklas
    [J]. MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT I, 2023, 1752 : 353 - 368
  • [10] Human-AI interaction: Augmenting decision-making for IT leader's project selection
    Judkins, Jarrett T.
    Hwang, Yujong
    Kim, Soyean
    [J]. INFORMATION DEVELOPMENT, 2024,