Explainable AI: A review of applications to neuroimaging data

Cited by: 20
Authors
Farahani, Farzad V. [1 ,2 ]
Fiok, Krzysztof [2 ]
Lahijanian, Behshad [3 ,4 ]
Karwowski, Waldemar [2 ]
Douglas, Pamela K. [5 ]
Affiliations
[1] Johns Hopkins Univ, Dept Biostat, Baltimore, MD 21218 USA
[2] Univ Cent Florida, Dept Ind Engn & Management Syst, Orlando, FL 32816 USA
[3] Univ Florida, Dept Ind & Syst Engn, Gainesville, FL USA
[4] Georgia Inst Technol, H Milton Stewart Sch Ind & Syst Engn, Atlanta, GA USA
[5] Univ Cent Florida, Sch Modeling Simulat & Training, Orlando, FL USA
Keywords
explainable AI; interpretability; artificial intelligence (AI); deep learning; neural networks; medical imaging; neuroimaging; SUPPORT VECTOR MACHINE; DEEP NEURAL-NETWORKS; ARTIFICIAL-INTELLIGENCE; FEATURE-SELECTION; CLASSIFICATION; TRANSPARENCY; DISEASES; VISION; IMPACT; CANCER;
DOI
10.3389/fnins.2022.906290
CLC number
Q189 [Neuroscience];
Discipline code
071006;
Abstract
Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models of the representations learned via hierarchical processing in the human brain. In medical imaging, these models have achieved human-level and even superior performance in the early diagnosis of a wide range of diseases. However, the goal is often not only to predict group membership or a diagnosis accurately but also to provide explanations that support the model's decision in a context that a human can readily interpret. This limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques, taking somewhat divergent approaches, have been developed to peer inside the "black box" and make sense of DNN models. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post-hoc capacity. We then focus on reviewing recent applications of post-hoc relevance techniques to neuroimaging data. Finally, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.
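The post-hoc relevance techniques the abstract refers to can be illustrated with a minimal sketch. The snippet below implements gradient-times-input attribution, one common post-hoc relevance method, on a toy logistic "classifier" with hypothetical weights; it is not the method or data of the reviewed paper, only an illustration of the general idea of attributing a trained model's prediction back to input features (e.g., voxels).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_times_input(x, w, b):
    """Gradient-times-input relevance for a logistic model p(x) = sigmoid(w.x + b).

    Each score r_i = x_i * dp/dx_i attributes the prediction to input
    feature i (e.g., one voxel of a neuroimaging volume).
    """
    p = sigmoid(w @ x + b)
    grad = p * (1.0 - p) * w   # analytic dp/dx for a linear logit
    return x * grad            # gradient x input attribution

# Hypothetical trained weights and one input sample, for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
relevance = gradient_times_input(x, w, 0.0)
print(relevance)  # one relevance score per input feature
```

In practice the same principle is applied to deep networks by computing the gradient via backpropagation; features with large positive scores are those the model relied on for its prediction.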
Pages: 26
Related papers (50 in total)
  • [31] The challenges of explainable AI in biomedical data science
    Han, Henry
    Liu, Xiangrong
    BMC BIOINFORMATICS, 2022, 22 (SUPPL 12)
  • [32] DEEPCUBE: Explainable AI Pipelines for Big Copernicus Data
    Gervasi, Chiara
    Ferrari, Alessia
    Papoutsis, Ioannis
    Touloumtzi, Souzana
    GEOMEDIA, 2021, 25 (03) : 26 - 29
  • [33] A Novel Explainable AI Model for Medical Data Analysis
    Shakhovska, Nataliya
    Shebeko, Andrii
    Prykarpatskyy, Yarema
    JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH, 2024, 14 (02) : 121 - 137
  • [34] Review of Human-Centered Explainable AI in Healthcare
    Song, Shuchao
    Chen, Yiqiang
    Yu, Hanchao
    Zhang, Yingwei
    Yang, Xiaodong
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2024, 36 (05): : 645 - 657
  • [35] Explainable AI: A Review of Machine Learning Interpretability Methods
    Linardatos, Pantelis
    Papastefanopoulos, Vasilis
    Kotsiantis, Sotiris
    ENTROPY, 2021, 23 (01) : 1 - 45
  • [36] Making AI Explainable in the Global South: A Systematic Review
    Okolo, Chinasa T.
    Dell, Nicola
    Vashistha, Aditya
    PROCEEDINGS OF THE 4TH ACM SIGCAS/SIGCHI CONFERENCE ON COMPUTING AND SUSTAINABLE SOCIETIES, COMPASS'22, 2022, : 439 - 452
  • [37] Is explainable AI responsible AI?
    Taylor, Isaac
    AI & SOCIETY, 2024, 40 (3) : 1695 - 1704
  • [38] ExMed: An AI Tool for Experimenting Explainable AI Techniques on Medical Data Analytics
    Kapcia, Marcin
    Eshkiki, Hassan
    Duell, Jamie
    Fan, Xiuyi
    Zhou, Shangming
    Mora, Benjamin
    2021 IEEE 33RD INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2021), 2021, : 841 - 845
  • [39] When explainable AI meets IoT applications for supervised learning
    Youcef Djenouri
    Asma Belhadi
    Gautam Srivastava
    Jerry Chun-Wei Lin
    Cluster Computing, 2023, 26 : 2313 - 2323