Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users' Information Processing

Cited by: 49
Authors:
Bauer, Kevin [1 ]
von Zahn, Moritz [2 ]
Hinz, Oliver [2 ]
Affiliations:
[1] Univ Mannheim, Informat Syst Dept, D-68161 Mannheim, Germany
[2] Goethe Univ, Informat Syst Dept, D-60323 Frankfurt, Germany
Keywords:
explainable artificial intelligence; user behavior; information processing; mental models; theoretical foundations; machine; explanations; systems; perspectives; algorithms; looking; expert
DOI:
10.1287/isre.2023.1199
Chinese Library Classification: G25 [Library science]; G35 [Information science]
Subject classification codes: 1205; 120501
Abstract:
Because of a growing number of initiatives and regulations, predictions of modern artificial intelligence (AI) systems increasingly come with explanations about why they behave the way they do. In this paper, we explore the impact of feature-based explanations on users' information processing. We designed two complementary empirical studies in which participants made incentivized decisions either on their own, with the aid of opaque predictions, or with explained predictions. In Study 1, laypeople engaged in a deliberately abstract investment game. In Study 2, experts from the real estate industry estimated listing prices for real German apartments. Our results indicate that the provision of feature-based explanations paves the way for AI systems to reshape users' sensemaking of information and understanding of the world around them. Specifically, explanations change users' situational weighting of available information and evoke mental model adjustments. Crucially, mental model adjustments are subject to confirmation bias, so that misconceptions can persist and even accumulate, possibly leading to suboptimal or biased decisions. Additionally, mental model adjustments create spillover effects that alter user behavior in related yet disparate domains. Overall, this paper provides important insights into potential downstream consequences of the broad employment of modern explainable AI methods. In particular, side effects of mental model adjustments present a potential risk of manipulating user behavior, promoting discriminatory inclinations, and increasing noise in decision making. Our findings may inform the refinement of current efforts by companies building AI systems and by regulators that aim to mitigate problems associated with the black-box nature of many modern AI systems.
Pages: 1582-1602
Page count: 22