Boosting Human Competences With Interpretable and Explainable Artificial Intelligence

Cited by: 2
Authors
Herzog, Stefan M. [1]
Franklin, Matija [2 ]
Affiliations
[1] Max Planck Inst Human Dev, Ctr Adapt Rat, Lentzeallee 94, D-14195 Berlin, Germany
[2] UCL, Div Psychol & Language Sci, Causal Cognit Lab, London, England
Source
DECISION-WASHINGTON | 2024, Vol. 11, No. 4
Keywords
explainable artificial intelligence; interpretable artificial intelligence; boosting; competences; knowledge; COGNITIVE FEEDBACK; METAANALYSIS; MODELS; BIAS;
DOI
10.1037/dec0000250
Chinese Library Classification (CLC)
B84 [Psychology];
Discipline Codes
04; 0402;
Abstract
Artificial intelligence (AI) is becoming integral to many areas of life, yet many, if not most, AI systems are opaque black boxes. This lack of transparency is a major source of concern, especially in high-stakes settings (e.g., medicine or criminal justice). The field of explainable AI (XAI) addresses this issue by explaining the decisions of opaque AI systems. However, such post hoc explanations are troubling because they cannot be fully faithful to what the original model computes; otherwise, there would be no need to use that black-box model in the first place. A promising alternative is simple, inherently interpretable models (e.g., simple decision trees), which can match the performance of opaque AI systems. Because interpretable models are, by design, faithful explanations of themselves, they empower informed decisions about whether to trust them. We connect research on XAI and inherently interpretable AI with behavioral science research on boosting competences. This perspective suggests that both interpretable AI and XAI could boost people's competence to critically evaluate AI systems, as well as their ability to make accurate judgments (e.g., medical diagnoses) in the absence of any AI support. Furthermore, we propose how to empirically assess whether and how AI support fosters such competences. Our theoretical analysis suggests that interpretable AI models are particularly promising and, given XAI's drawbacks, preferable. Finally, we argue that explaining large language models (LLMs) faces challenges similar to those of XAI for supervised machine learning, and that the gist of our conjectures also holds for LLMs.
Pages: 493-510
Page count: 18