Explainable AI: The Effect of Contradictory Decisions and Explanations on Users' Acceptance of AI Systems

Cited by: 12
Authors
Ebermann, Carolin [1 ]
Selisky, Matthias [1 ]
Weibelzahl, Stephan [1 ]
Affiliations
[1] PFH Private University of Applied Sciences, Business Psychology, Göttingen, Germany
Keywords
artificial intelligence; cognitive dissonance; information systems; empirical assessment; attitude change; expert systems; algorithm; design; model; arousal
DOI
10.1080/10447318.2022.2126812
Chinese Library Classification (CLC)
TP3 [computing technology; computer technology]
Discipline code
0812
Abstract
Providing explanations of an artificial intelligence (AI) system has been suggested as a means to increase users' acceptance during the decision-making process. However, little research has examined the psychological mechanism by which these explanations elicit a positive or negative reaction in the user. To address this gap, we investigate how user acceptance is affected when an AI system's decisions and the explanations provided for them contradict the user's own judgment. An interdisciplinary research model was derived and validated in an experiment with 78 participants. The findings suggest that in decision situations with cognitive misfit, users experience negative mood significantly more often and evaluate the AI system's support negatively. The article therefore offers guidance on new interdisciplinary approaches to human-AI interaction during the decision-making process and sheds light on how explainable AI can increase users' acceptance of such systems.
Pages: 1807-1826
Page count: 20
Related articles
50 records in total
  • [1] Understand and Testing AI Decisions with Explainable AI
    Chung K.
    VDI Berichte, 2022, 2405: 249-256
  • [2] Counterfactual Explanations in Explainable AI: A Tutorial
    Wang, Cong
    Li, Xiao-Hui
    Han, Haocheng
    Wang, Shendi
    Wang, Luning
    Cao, Caleb Chen
    Chen, Lei
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021: 4080-4081
  • [3] Putting explainable AI in context: institutional explanations for medical AI
    Theunissen, Mark
    Browning, Jacob
    ETHICS AND INFORMATION TECHNOLOGY, 2022, 24 (02)
  • [4] Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions
    Bhattacharya, Aditya
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024
  • [5] Ambient Explanations: Ambient Intelligence and Explainable AI
    Cassens, Joerg
    Wegener, Rebekah
    AMBIENT INTELLIGENCE (AMI 2019), 2019, 11912: 370-376
  • [6] Build confidence and acceptance of AI-based decision support systems - Explainable and liable AI
    Nicodeme, Claire
    2020 13TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTION (HSI), 2020: 20-23
  • [7] Explainable AI as evidence of fair decisions
    Leben, Derek
    FRONTIERS IN PSYCHOLOGY, 2023, 14
  • [8] Collaborative Explainable AI: A Non-algorithmic Approach to Generating Explanations of AI
    Mamun, Tauseef Ibne
    Hoffman, Robert R.
    Mueller, Shane T.
    HCI INTERNATIONAL 2021 - LATE BREAKING POSTERS, HCII 2021, PT I, 2021, 1498: 144-150
  • [9] Towards FAIR Explainable AI: a standardized ontology for mapping XAI solutions to use cases, explanations, and AI systems
    Adhikari, Ajaya
    Wenink, Edwin
    van der Waa, Jasper
    Bouter, Cornelis
    Tolios, Ioannis
    Raaijmakers, Stephan
    PROCEEDINGS OF THE 15TH INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS, PETRA 2022, 2022: 562-568