Explainable AI: A review of applications to neuroimaging data

Cited by: 20
Authors
Farahani, Farzad V. [1 ,2 ]
Fiok, Krzysztof [2 ]
Lahijanian, Behshad [3 ,4 ]
Karwowski, Waldemar [2 ]
Douglas, Pamela K. [5 ]
Affiliations
[1] Johns Hopkins Univ, Dept Biostat, Baltimore, MD 21218 USA
[2] Univ Cent Florida, Dept Ind Engn & Management Syst, Orlando, FL 32816 USA
[3] Univ Florida, Dept Ind & Syst Engn, Gainesville, FL USA
[4] Georgia Inst Technol, H Milton Stewart Sch Ind & Syst Engn, Atlanta, GA USA
[5] Univ Cent Florida, Sch Modeling Simulat & Training, Orlando, FL USA
Keywords
explainable AI; interpretability; artificial intelligence (AI); deep learning; neural networks; medical imaging; neuroimaging; SUPPORT VECTOR MACHINE; DEEP NEURAL-NETWORKS; ARTIFICIAL-INTELLIGENCE; FEATURE-SELECTION; CLASSIFICATION; TRANSPARENCY; DISEASES; VISION; IMPACT; CANCER;
DOI
10.3389/fnins.2022.906290
CLC number (Chinese Library Classification)
Q189 [Neuroscience];
Discipline classification code
071006;
Abstract
Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models of representations learned via hierarchical processing in the human brain. In medical imaging, these models have achieved human-level and even higher performance in the early diagnosis of a wide range of diseases. However, the goal is often not only to predict group membership or a diagnosis accurately but also to provide explanations that support the model's decision in a context that a human can readily interpret. The limited transparency of DNNs has hindered their adoption across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the "black box" and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post-hoc capacity. We then focus on reviewing recent applications of post-hoc relevance techniques as applied to neuroimaging data. Moreover, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.
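To make the idea of a post-hoc relevance technique concrete, the sketch below computes a vanilla-gradient saliency map, one of the simplest methods in this family: the gradient of the predicted class score with respect to the input indicates how strongly each input location influenced the decision. This is an illustrative example only, not code from the reviewed article; it assumes PyTorch is available, and the untrained toy CNN and random input are hypothetical stand-ins for a trained neuroimaging classifier and a real brain slice.

    import torch
    import torch.nn as nn

    # Toy stand-in for a trained classifier; a real application would load
    # a model trained on, e.g., structural MRI slices (hypothetical setup).
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),  # two classes, e.g., patient vs. control
    )
    model.eval()

    # Random single-channel image standing in for a 2D brain slice.
    x = torch.randn(1, 1, 64, 64, requires_grad=True)

    # Forward pass: take the score of the predicted class.
    scores = model(x)
    predicted = scores.argmax(dim=1).item()

    # Backward pass: the gradient of that score w.r.t. the input, taken
    # in absolute value, is the vanilla-gradient relevance map.
    scores[0, predicted].backward()
    saliency = x.grad.abs().squeeze()

    print(saliency.shape)  # torch.Size([64, 64]): one relevance value per pixel

More elaborate relevance methods discussed in this literature (e.g., layer-wise relevance propagation or integrated gradients) refine the same post-hoc recipe: attribute a trained model's output back onto its input features.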
Pages: 26
Related papers
50 records in total
  • [21] A comprehensive review of explainable AI for disease diagnosis
    Biswas, Al Amin
    ARRAY, 2024, 22
  • [22] A review of explainable AI in the satellite data, deep machine learning, and human poverty domain
    Hall, Ola
    Ohlsson, Mattias
    Rognvaldsson, Thorsteinn
    PATTERNS, 2022, 3 (10)
  • [23] Privacy-Preserving and Explainable AI in Industrial Applications
    Ogrezeanu, Iulian
    Vizitiu, Anamaria
    Ciusdel, Costin
    Puiu, Andrei
    Coman, Simona
    Boldisor, Cristian
    Itu, Alina
    Demeter, Robert
    Moldoveanu, Florin
    Suciu, Constantin
    Itu, Lucian
    APPLIED SCIENCES-BASEL, 2022, 12 (13)
  • [24] ScanSavant: Malware Detection for Android Applications with Explainable AI
    Navaneethan, S.
    Udhaya Kumar, S.
    International Journal of Interactive Mobile Technologies, 2024, 18 (19): 171-181
  • [25] Promising AI Applications in Power Systems: Explainable AI (XAI), Transformers, LLMs
    Lukianykhin, Oleh
    Shendryk, Vira
    Shendryk, Sergii
    Malekian, Reza
    NEW TECHNOLOGIES, DEVELOPMENT AND APPLICATION VII, VOL 2, NT-2024, 2024, 1070: 66-76
  • [26] Explainable AI
    Veerappa, Manjunatha
    Rinzivillo, Salvo
    ERCIM NEWS, 2023, (134)
  • [27] Explainable AI
    Monreale, Anna
    ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT, 2019, 319: 5-5
  • [28] Explainable AI
    Schmid, Ute
    Wrede, Britta
    KUNSTLICHE INTELLIGENZ, 2022, 36 (3-4): 207-210
  • [30] Explainable AI
    Matsuo T.
    Todoriki M.
    Tago S.-I.
    Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers, 2020, 74 (01): 30-34