Boosting Human Competences With Interpretable and Explainable Artificial Intelligence

Cited by: 2
Authors
Herzog, Stefan M. [1 ]
Franklin, Matija [2 ]
Affiliations
[1] Max Planck Inst Human Dev, Ctr Adapt Rat, Lentzealle 94, D-14195 Berlin, Germany
[2] UCL, Div Psychol & Language Sci, Causal Cognit Lab, London, England
Source
DECISION-WASHINGTON | 2024, Vol. 11, No. 4
Keywords
explainable artificial intelligence; interpretable artificial intelligence; boosting; competences; knowledge; COGNITIVE FEEDBACK; METAANALYSIS; MODELS; BIAS;
DOI
10.1037/dec0000250
CLC Number
B84 [Psychology]
Subject Classification Code
04; 0402
Abstract
Artificial intelligence (AI) is becoming integral to many areas of life, yet many, if not most, AI systems are opaque black boxes. This lack of transparency is a major source of concern, especially in high-stakes settings (e.g., medicine or criminal justice). The field of explainable AI (XAI) addresses this issue by explaining the decisions of opaque AI systems. However, such post hoc explanations are troubling because they cannot be faithful to what the original model computes; otherwise, there would be no need to use that black-box model. A promising alternative is simple, inherently interpretable models (e.g., simple decision trees), which can match the performance of opaque AI systems. Because interpretable models are, by design, faithful explanations of themselves, they empower informed decisions about whether to trust them. We connect research on XAI and inherently interpretable AI with research in behavioral science on boosting competences. This perspective suggests that both interpretable AI and XAI could boost people's competences to critically evaluate AI systems and to make accurate judgments (e.g., medical diagnoses) in the absence of any AI support. Furthermore, we propose how to empirically assess whether and how AI support fosters such competences. Our theoretical analysis suggests that interpretable AI models are particularly promising and, because of XAI's drawbacks, preferable. Finally, we argue that explaining large language models (LLMs) faces challenges similar to those of XAI for supervised machine learning, and that the gist of our conjectures also holds for LLMs.
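To make the abstract's contrast concrete, here is a minimal sketch (not from the article; dataset and hyperparameters are illustrative assumptions) of an inherently interpretable model in the sense the authors describe: a shallow decision tree whose learned rules are a faithful description of exactly what the model computes, so no post hoc (XAI) approximation is needed to explain it.

```python
# Minimal sketch of an inherently interpretable model (assumed example).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree: few enough rules that a person can audit every decision path.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {tree.score(X_test, y_test):.2f}")

# The printed rules ARE the model: a faithful explanation of itself,
# in contrast to post hoc explanations of an opaque black box.
print(export_text(tree, feature_names=list(X.columns)))
```

Whether such a simple model suffices is an empirical question for each task; the abstract's claim is that when it does match the opaque system's performance, it lets users verify the decision logic directly rather than trusting an after-the-fact explanation.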
Pages: 493-510
Page count: 18