Boosting Human Competences With Interpretable and Explainable Artificial Intelligence

Cited by: 2
Authors
Herzog, Stefan M. [1 ]
Franklin, Matija [2 ]
Affiliations
[1] Max Planck Inst Human Dev, Ctr Adapt Rat, Lentzeallee 94, D-14195 Berlin, Germany
[2] UCL, Div Psychol & Language Sci, Causal Cognit Lab, London, England
Source
DECISION-WASHINGTON | 2024, Vol. 11, Issue 4
Keywords
explainable artificial intelligence; interpretable artificial intelligence; boosting; competences; knowledge; cognitive feedback; meta-analysis; models; bias
DOI
10.1037/dec0000250
Chinese Library Classification (CLC)
B84 [Psychology];
Subject Classification Code
04; 0402
Abstract
Artificial intelligence (AI) is becoming integral to many areas of life, yet many, if not most, AI systems are opaque black boxes. This lack of transparency is a major source of concern, especially in high-stakes settings (e.g., medicine or criminal justice). The field of explainable AI (XAI) addresses this issue by explaining the decisions of opaque AI systems. However, such post hoc explanations are troubling because they cannot be faithful to what the original model computes; otherwise, there would be no need to use that black-box model in the first place. A promising alternative is simple, inherently interpretable models (e.g., simple decision trees), which can match the performance of opaque AI systems. Because interpretable models represent, by design, faithful explanations of themselves, they empower informed decisions about whether to trust them. We connect research on XAI and inherently interpretable AI with behavioral-science research on boosting competences. This perspective suggests that both interpretable AI and XAI could boost people's competence to critically evaluate AI systems and their ability to make accurate judgments (e.g., medical diagnoses) in the absence of any AI support. Furthermore, we propose how to empirically assess whether and how AI support fosters such competences. Our theoretical analysis suggests that interpretable AI models are particularly promising and, because of XAI's drawbacks, preferable. Finally, we argue that explaining large language models (LLMs) faces challenges similar to those of XAI for supervised machine learning and that the gist of our conjectures also holds for LLMs.
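To make the abstract's contrast concrete, here is a minimal sketch, assuming scikit-learn and its bundled breast-cancer toy dataset (the article names neither): a shallow decision tree serves as its own faithful explanation, whereas the black-box ensemble would require a separate post hoc XAI method.

```python
# A minimal, hypothetical sketch (scikit-learn and the breast-cancer toy
# dataset are our assumptions; the article names neither). It contrasts an
# opaque "black box" with a simple, inherently interpretable model: the
# shallow tree's printed rules are, by construction, a faithful account of
# exactly what the model computes, so no post hoc explanation is needed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()  # stand-in for a high-stakes diagnostic task
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

# Opaque model: often accurate, but its internals are not human-readable.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Interpretable alternative: a shallow decision tree whose complete decision
# logic can be read, audited, and second-guessed by a domain expert.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print(f"black-box accuracy:    {black_box.score(X_te, y_te):.3f}")
print(f"shallow-tree accuracy: {tree.score(X_te, y_te):.3f}")
print(export_text(tree, feature_names=data.feature_names.tolist()))
```

If the shallow tree's accuracy is close to the ensemble's, the printed rules let a user judge directly whether to trust the model, which is the competence-boosting property the abstract attributes to inherently interpretable AI.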
Pages: 493-510
Number of pages: 18
Related Papers
50 records in total
  • [31] Explainable artificial intelligence in ophthalmology
    Tan, Ting Fang
    Dai, Peilun
    Zhang, Xiaoman
    Jin, Liyuan
    Poh, Stanley
    Hong, Dylan
    Lim, Joshua
    Lim, Gilbert
    Teo, Zhen Ling
    Liu, Nan
    Ting, Daniel Shu Wei
    CURRENT OPINION IN OPHTHALMOLOGY, 2023, 34 (05) : 422 - 430
  • [32] A Review of Explainable Artificial Intelligence
    Lin, Kuo-Yi
    Liu, Yuguang
    Li, Li
    Dou, Runliang
    ADVANCES IN PRODUCTION MANAGEMENT SYSTEMS: ARTIFICIAL INTELLIGENCE FOR SUSTAINABLE AND RESILIENT PRODUCTION SYSTEMS, APMS 2021, PT IV, 2021, 633 : 574 - 584
  • [33] Explainable Artificial Intelligence for Cybersecurity
    Sharma, Deepak Kumar
    Mishra, Jahanavi
    Singh, Aeshit
    Govil, Raghav
    Srivastava, Gautam
    Lin, Jerry Chun-Wei
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 103
  • [34] Explainable Artificial Intelligence: A Survey
    Dosilovic, Filip Karlo
    Brcic, Mario
    Hlupic, Nikica
    2018 41ST INTERNATIONAL CONVENTION ON INFORMATION AND COMMUNICATION TECHNOLOGY, ELECTRONICS AND MICROELECTRONICS (MIPRO), 2018, : 210 - 215
  • [35] A Systematic Review of Human-Computer Interaction and Explainable Artificial Intelligence in Healthcare With Artificial Intelligence Techniques
    Nazar, Mobeen
    Alam, Muhammad Mansoor
    Yafi, Eiad
    Su'ud, Mazliham Mohd
    IEEE ACCESS, 2021, 9 : 153316 - 153348
  • [36] Boosting the accuracy of property valuation with ensemble learning and explainable artificial intelligence: The case of Hong Kong
    Deng, Lin
    Zhang, Xueqing
    ANNALS OF REGIONAL SCIENCE, 2025, 74 (01)
  • [37] DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence
    Wani, Niyaz Ahmad
    Kumar, Ravinder
    Bedi, Jatin
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2024, 243
  • [38] Fast and interpretable prediction of seismic kinematics of flexible retaining walls in sand through explainable artificial intelligence
    Pistolesi, Francesco
    Baldassini, Michele
    Volpe, Evelina
    Focacci, Francesco
    Cattoni, Elisabetta
    COMPUTERS AND GEOTECHNICS, 2025, 179
  • [39] Human-in-the-loop: Explainable or accurate artificial intelligence by exploiting human bias?
    Valtonen, Laura
    Makinen, Saku J.
    2022 IEEE 28TH INTERNATIONAL CONFERENCE ON ENGINEERING, TECHNOLOGY AND INNOVATION (ICE/ITMC) & 31ST INTERNATIONAL ASSOCIATION FOR MANAGEMENT OF TECHNOLOGY, IAMOT JOINT CONFERENCE, 2022
  • [40] Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles
    Xie, Jiming
    Zhang, Yan
    Qin, Yaqin
    Wang, Bijun
    Dong, Shuai
    Li, Ke
    Xia, Yulan
    TRANSPORTATION RESEARCH INTERDISCIPLINARY PERSPECTIVES, 2025, 29