Boosting Human Competences With Interpretable and Explainable Artificial Intelligence

Cited by: 2
Authors
Herzog, Stefan M. [1]
Franklin, Matija [2 ]
Affiliations
[1] Max Planck Inst Human Dev, Ctr Adapt Rat, Lentzeallee 94, D-14195 Berlin, Germany
[2] UCL, Div Psychol & Language Sci, Causal Cognit Lab, London, England
Source
DECISION-WASHINGTON | 2024, Vol. 11, Iss. 04
Keywords
explainable artificial intelligence; interpretable artificial intelligence; boosting; competences; knowledge; COGNITIVE FEEDBACK; METAANALYSIS; MODELS; BIAS;
DOI
10.1037/dec0000250
Chinese Library Classification (CLC)
B84 [Psychology];
Discipline Classification Codes
04; 0402;
Abstract
Artificial intelligence (AI) is becoming integral to many areas of life, yet many, if not most, AI systems are opaque black boxes. This lack of transparency is a major source of concern, especially in high-stakes settings (e.g., medicine or criminal justice). The field of explainable AI (XAI) addresses this issue by explaining the decisions of opaque AI systems. However, such post hoc explanations are troubling because they cannot be faithful to what the original model computes; otherwise, there would be no need to use that black-box model. A promising alternative is simple, inherently interpretable models (e.g., simple decision trees), which can match the performance of opaque AI systems. Because interpretable models represent, by design, faithful explanations of themselves, they empower informed decisions about whether to trust them. We connect research on XAI and inherently interpretable AI with research in behavioral science on boosting competences. This perspective suggests that both interpretable AI and XAI could boost people's competence to critically evaluate AI systems, as well as their ability to make accurate judgments (e.g., medical diagnoses) in the absence of any AI support. Furthermore, we propose how to empirically assess whether and how AI support fosters such competences. Our theoretical analysis suggests that interpretable AI models are particularly promising and, because of XAI's drawbacks, preferable. Finally, we argue that explaining large language models (LLMs) faces challenges similar to those of XAI for supervised machine learning, and that the gist of our conjectures also holds for LLMs.
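To make the abstract's central contrast concrete, the following minimal Python sketch (an editorial illustration, not from the article; it assumes scikit-learn and uses an arbitrary benchmark dataset) fits a shallow decision tree, which is its own faithful explanation, alongside an opaque random forest that is explained post hoc via permutation importance:

# Minimal sketch of "inherently interpretable" vs. "post hoc explained" models.
# Assumes scikit-learn; dataset and hyperparameters are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Inherently interpretable: the printed rules ARE the model, so the
# "explanation" is faithful by construction.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("tree accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=list(X.columns)))

# Opaque model plus post hoc explanation: permutation importance summarizes
# the model's behavior but is not guaranteed to be faithful to it.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("forest accuracy:", forest.score(X_te, y_te))
imp = permutation_importance(forest, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")

The tree's printed rules can be audited directly, whereas the permutation scores only summarize the forest's behavior from the outside, mirroring the faithfulness argument in the abstract.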
Pages: 493-510
Page count: 18