Feature Analysis Network: An Interpretable Idea in Deep Learning

Cited by: 4
Authors
Li, Xinyu [1 ]
Gao, Xiaoguang [1 ]
Wang, Qianglong [1 ]
Wang, Chenfeng [1 ]
Li, Bo [1 ]
Wan, Kaifang [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Elect & Informat, Xian, Peoples R China
Funding
National Natural Science Foundation of China; National Science Foundation (US);
Keywords
Deep learning; Bayesian networks; Feature analysis; Correlation clustering; FAULT-DETECTION; ALGORITHM; MODELS; DIAGNOSIS;
DOI
10.1007/s12559-023-10238-0
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Learning (DL) stands out as a leading model for processing high-dimensional data, where the nonlinear transformations of the hidden layers effectively extract features. However, these unexplainable features make DL a low-interpretability model. Conversely, a Bayesian network (BN) is transparent and highly interpretable, and it can be helpful for interpreting DL. To improve the interpretability of DL from the perspective of feature cognition, we propose the feature analysis network (FAN), a DL structure fused with a BN. FAN retains the feature extraction capability of DL and applies a BN as the output layer to learn the relationships between the features and the outputs. These relationships can be represented probabilistically and intuitively by the structure and parameters of the BN. In a further study, a correlation clustering-based feature analysis network (cc-FAN) is proposed to detect the correlations among inputs and to preserve this information, explaining the features' physical meaning to a certain extent. To quantitatively evaluate the interpretability of the model, we design network simplification and interpretability indicators separately. Experiments on eight datasets show that FAN has better interpretability than the other models, with essentially unchanged model accuracy and similar model complexity. On the radar effect mechanism dataset, measured by the feature structure-based relevance interpretability indicator, FAN is up to 4.8 times better than the other models, and cc-FAN is up to 21.5 times better. FAN and cc-FAN enhance the interpretability of the DL model structure in terms of features; moreover, based on the input correlations, cc-FAN can help us better understand the physical meaning of the features.
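The abstract's core idea, a DL feature extractor whose output layer is a Bayesian network relating features to outputs, can be illustrated with a minimal sketch. Here a naive-Bayes output layer (the simplest BN, with the class node as parent of every binarized feature node) stands in for the paper's learned BN structure; the random tanh hidden layer, the thresholding scheme, and all variable names are illustrative assumptions, not the authors' actual FAN construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes separated along a latent direction.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# "DL" stage: a fixed random hidden layer as a stand-in feature extractor.
W = rng.normal(size=(4, 3))
H = np.tanh(X @ W)          # hidden-layer features

# "BN" stage: binarize features, then fit a naive-Bayes output layer.
# With the class node as parent of each feature node, the joint
# factorizes as P(y) * prod_j P(h_j | y).
B = (H > 0).astype(int)

def fit_nb(B, y, n_classes=2, alpha=1.0):
    priors = np.array([(y == c).mean() for c in range(n_classes)])
    # cond[c, j] = P(h_j = 1 | y = c), with Laplace smoothing
    cond = np.array([(B[y == c].sum(0) + alpha) / ((y == c).sum() + 2 * alpha)
                     for c in range(n_classes)])
    return priors, cond

def predict_nb(B, priors, cond):
    # log P(y=c) + sum_j log P(h_j | y=c) for each sample
    ll = np.log(priors)[:, None] + (np.log(cond) @ B.T
                                    + np.log(1 - cond) @ (1 - B.T))
    return ll.argmax(0)

priors, cond = fit_nb(B, y)
acc = (predict_nb(B, priors, cond) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Unlike a dense softmax layer, the BN's parameters (`priors` and `cond`) can be read off directly as probabilities, which is the sense in which the abstract calls the feature-to-output relationships "probabilistically represented."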
Pages: 803-826 (24 pages)
Related Papers (50 total)
  • [1] Interpretable Feature Learning in Multivariate Big Data Analysis for Network Monitoring
    Camacho, Jose
    Wasielewska, Katarzyna
    Bro, Rasmus
    Kotz, David
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (03): 2926-2943
  • [2] Deep Natural Language Feature Learning for Interpretable Prediction
    Urrutia, Felipe
    Buc, Cristian
    Barriere, Valentin
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023: 3736-3763
  • [3] Automated Feature Document Review via Interpretable Deep Learning
    Ye, Ming
    Chen, Yuanfan
    Zhang, Xin
    He, Jinning
    Cao, Jicheng
    Liu, Dong
    Gao, Jing
    Dai, Hailiang
    Cheng, Shengyu
    2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: COMPANION PROCEEDINGS, ICSE-COMPANION, 2023: 351-354
  • [4] Generalizable and Interpretable Deep Learning for Network Congestion Prediction
    Poularakis, Konstantinos
    Qin, Qiaofeng
    Le, Franck
    Kompella, Sastry
    Tassiulas, Leandros
    2021 IEEE 29TH INTERNATIONAL CONFERENCE ON NETWORK PROTOCOLS (ICNP 2021), 2021
  • [5] The Structure of Deep Neural Network for Interpretable Transfer Learning
    Kim, Dowan
    Lim, Woohyun
    Hong, Minye
    Kim, Hyeoncheol
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP), 2019: 181-184
  • [6] Predictive analysis and feature extraction of weld penetration in P-GMAW based on interpretable deep learning
    Pan, Yu
    Li, Chunkai
    Shi, Yu
    Dai, Yue
    Wang, Wenkai
    JOURNAL OF MANUFACTURING PROCESSES, 2024, 124: 1506-1518
  • [7] Smarter water quality monitoring in reservoirs using interpretable deep learning models and feature importance analysis
    Majnooni, Shabnam
    Fooladi, Mahmood
    Nikoo, Mohammad Reza
    Al-Rawas, Ghazi
    Haghighi, Ali Torabi
    Nazari, Rouzbeh
    Al-Wardy, Malik
    Gandomi, Amir H.
    JOURNAL OF WATER PROCESS ENGINEERING, 2024, 60
  • [8] Visually Interpretable Fuzzy Neural Classification Network With Deep Convolutional Feature Maps
    Juang, Chia-Feng
    Cheng, Yun-Wei
    Lin, Yeh-Ming
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2024, 32 (03): 1063-1077
  • [9] Interpretable Sentiment Analysis based on Deep Learning: An overview
    Jawale, Shila
    Sawarkar, S. D.
    2020 IEEE PUNE SECTION INTERNATIONAL CONFERENCE (PUNECON), 2020: 65-70
  • [10] Interpretable Deep Learning for Spatial Analysis of Severe Hailstorms
    Gagne, David John, II
    Haupt, Sue Ellen
    Nychka, Douglas W.
    Thompson, Gregory
    MONTHLY WEATHER REVIEW, 2019, 147 (08): 2827-2845