Boosting Human Competences With Interpretable and Explainable Artificial Intelligence

Cited by: 2
Authors
Herzog, Stefan M. [1 ]
Franklin, Matija [2 ]
Affiliations
[1] Max Planck Inst Human Dev, Ctr Adapt Rat, Lentzeallee 94, D-14195 Berlin, Germany
[2] UCL, Div Psychol & Language Sci, Causal Cognit Lab, London, England
Source
DECISION-WASHINGTON | 2024, Vol. 11, No. 4
Keywords
explainable artificial intelligence; interpretable artificial intelligence; boosting; competences; knowledge; COGNITIVE FEEDBACK; METAANALYSIS; MODELS; BIAS
DOI
10.1037/dec0000250
Chinese Library Classification
B84 [Psychology]
Subject Classification Codes
04; 0402
Abstract
Artificial intelligence (AI) is becoming integral to many areas of life, yet many, if not most, AI systems are opaque black boxes. This lack of transparency is a major source of concern, especially in high-stakes settings (e.g., medicine or criminal justice). The field of explainable AI (XAI) addresses this issue by explaining the decisions of opaque AI systems post hoc. However, such post hoc explanations are troubling because they cannot be fully faithful to what the original model computes; if they were, there would be no need to use the black-box model in the first place. A promising alternative is simple, inherently interpretable models (e.g., simple decision trees), which can match the performance of opaque AI systems. Because interpretable models are, by design, faithful explanations of themselves, they empower users to make informed decisions about whether to trust them. We connect research on XAI and inherently interpretable AI with behavioral science research on boosts for competences. This perspective suggests that both interpretable AI and XAI could boost people's competence to critically evaluate AI systems, as well as their ability to make accurate judgments (e.g., medical diagnoses) in the absence of any AI support. Furthermore, we propose how to empirically assess whether, and how, AI support fosters such competences. Our theoretical analysis suggests that interpretable AI models are particularly promising and, given XAI's drawbacks, preferable. Finally, we argue that explaining large language models (LLMs) faces challenges similar to those of XAI for supervised machine learning, and that the gist of our conjectures also holds for LLMs.
Pages: 493-510
Page count: 18
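
A minimal sketch can make the abstract's central contrast concrete. The code below is purely illustrative and not from the paper: it assumes scikit-learn, uses the library's bundled breast cancer dataset as a stand-in for a high-stakes task, and casts a depth-3 decision tree as the simple, inherently interpretable model and a random forest as the opaque black box. A second tree trained to mimic the forest plays the role of a post hoc XAI-style surrogate explanation; its fidelity (agreement with the black box on new data) is typically below 1.0, which is exactly the faithfulness problem the abstract describes, whereas the interpretable tree's printed rules describe its own decisions completely.

    # Illustrative sketch (not from Herzog & Franklin, 2024): an inherently
    # interpretable "glass box" vs. a post hoc surrogate explanation of a
    # black box. Dataset, models, and tree depth are arbitrary choices.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Inherently interpretable model: the fitted tree IS its own explanation.
    glass_box = DecisionTreeClassifier(max_depth=3, random_state=0)
    glass_box.fit(X_train, y_train)

    # Opaque model plus a post hoc, XAI-style surrogate explanation: a simple
    # tree trained to mimic the black box's predictions, not the true labels.
    black_box = RandomForestClassifier(n_estimators=200, random_state=0)
    black_box.fit(X_train, y_train)
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))

    # Fidelity: how often the surrogate agrees with the black box on new data.
    # Anything below 1.0 means the "explanation" misdescribes some decisions.
    fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
    print(f"glass-box accuracy: {glass_box.score(X_test, y_test):.3f}")
    print(f"black-box accuracy: {black_box.score(X_test, y_test):.3f}")
    print(f"surrogate fidelity: {fidelity:.3f}")
    print(export_text(glass_box))  # the interpretable model's full decision logic

If the interpretable tree matches the forest's accuracy on a given task, the abstract's conclusion applies directly: the self-explaining model can simply replace the black box, and no post hoc explanation is needed.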
Related Papers (50 in total)
  • [1] Explainable Artificial Intelligence for Interpretable Data Minimization
    Becker, Maximilian
    Toprak, Emrah
    Beyerer, Juergen
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023: 885-893
  • [2] IFC-BD: An Interpretable Fuzzy Classifier for Boosting Explainable Artificial Intelligence in Big Data
    Aghaeipoor, Fatemeh
    Javidi, Mohammad Masoud
    Fernandez, Alberto
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2022, 30 (3): 830-840
  • [3] Cybertrust: From Explainable to Actionable and Interpretable Artificial Intelligence
    Linkov, Igor
    Galaitsi, Stephanie
    Trump, Benjamin D.
    Keisler, Jeffrey M.
    Kott, Alexander
    COMPUTER, 2020, 53 (9): 91-96
  • [4] A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications
    Basagaoglu, Hakan
    Chakraborty, Debaditya
    Do Lago, Cesar
    Gutierrez, Lilianna
    Sahinli, Mehmet Arif
    Giacomoni, Marcio
    Furl, Chad
    Mirchi, Ali
    Moriasi, Daniel
    Sengor, Sema Sevinc
    WATER, 2022, 14 (8)
  • [5] Toward Explainable and Interpretable Building Energy Modelling: An Explainable Artificial Intelligence Approach
    Zhang, Wei
    Liu, Fang
    Wen, Yonggang
    Nee, Bernard
    BUILDSYS'21: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILT ENVIRONMENTS, 2021: 255-258
  • [6] Guest Editorial: New Developments in Explainable and Interpretable Artificial Intelligence
    Subbalakshmi, K. P. S.
    Samek, W.
    Hu, X. B.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (4): 1427-1428
  • [7] Special issue on "Towards robust explainable and interpretable artificial intelligence"
    Tomasiello, Stefania
    Feng, Feng
    Zhao, Yichuan
    EVOLUTIONARY INTELLIGENCE, 2024, 17 (1): 417-418
  • [8] Explainable vs. interpretable artificial intelligence frameworks in oncology
    Bertsimas, Dimitris
    Margonis, Georgios Antonios
    TRANSLATIONAL CANCER RESEARCH, 2023, 12 (2): 217-220
  • [9] Automated and Interpretable Fake News Detection With Explainable Artificial Intelligence
    Giri, Moyank
    Eswaran, Sivaraman
    Honnavalli, Prasad
    Daniel, D.
    JOURNAL OF APPLIED SECURITY RESEARCH, 2024