Explainable Deep Learning in Spectral and Medical Image Analysis

Cited by: 1
Authors:
Liu, Xuyang [1 ]
Duan, Chaoshu [1 ]
Cai, Wensheng [1 ,2 ]
Shao, Xueguang [1 ,2 ]
Affiliations:
[1] Nankai Univ, State Key Lab Med Chem Biol, Tianjin Key Lab Biosensing & Mol Recognit, Res Ctr Analyt Sci,Coll Chem, Tianjin 300071, Peoples R China
[2] Haihe Lab Sustainable Chem Transformat, Tianjin 300192, Peoples R China
Funding: National Natural Science Foundation of China
Keywords:
deep learning; interpretability method; neural network; medical image analysis; spectral analysis; multivariate calibration; variable selection; algorithm; model
DOI: 10.7536/PC220512
CLC classification: O6 [Chemistry]
Discipline code: 0703
Abstract:
Deep learning is a modeling approach based on neural networks, in which multiple functional perception layers are stacked and optimized by learning the inherent regularities in large amounts of data to achieve end-to-end modeling. The growth of data and improvements in computing power have promoted the application of deep learning in spectral and medical image analysis. The lack of interpretability of the constructed models, however, remains an obstacle to their further development and application. To overcome this obstacle, various interpretability methods have been proposed. According to their principles of explanation, these methods fall into three categories: visualization methods, model distillation, and interpretable models. Visualization methods and model distillation are external algorithms that interpret a model without changing its structure, whereas interpretable models aim to make the model structure itself interpretable. In this review, the principles of deep learning and of the three classes of interpretability methods are introduced from an algorithmic perspective, and applications of interpretability methods in spectral and medical image analysis over the past three years are summarized. In most studies, external algorithms were developed to make models explainable, and these methods were found to provide reasonable explanations of the abilities of deep learning models; few studies, however, attempt to construct interpretable algorithms within the networks themselves. Furthermore, most studies train models on large amounts of labeled data, which incurs heavy costs in both labor and expense. Therefore, training strategies for small data sets, approaches to enhance the interpretability of models, and the construction of interpretable deep learning architectures are still needed in future work.
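The visualization methods and model distillation described above are external algorithms: they probe a trained model without modifying it. As a concrete illustration (a minimal sketch, not code from the reviewed article), occlusion sensitivity is one such model-agnostic visualization method: it slides a baseline patch over the input image and records how much the model's score drops, so regions whose occlusion causes a large drop are deemed important. The `toy_model`, patch size, and baseline value here are illustrative assumptions.

```python
import numpy as np

def occlusion_saliency(model, x, patch=4, baseline=0.0):
    """Occlusion sensitivity map: replace each patch of the input
    with a baseline value and record the drop in the model's score.
    A larger drop means the region matters more to the prediction."""
    base_score = model(x)
    h, w = x.shape
    sal = np.zeros_like(x, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = x.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            sal[i:i + patch, j:j + patch] = base_score - model(occluded)
    return sal

# Toy "model": scores an 8x8 image by the mean of its top-left quadrant,
# so the saliency map should highlight exactly that region.
toy_model = lambda img: img[:4, :4].mean()

x = np.ones((8, 8))
sal = occlusion_saliency(toy_model, x, patch=4)
print(sal[:4, :4].mean() > sal[4:, 4:].mean())  # prints True
```

Because the method treats the model as a black box, the same loop applies unchanged to a deep network for spectra or histopathology images; only the `model` callable and the patch shape need to change.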
Pages: 2561-2572 (12 pages)