Feature Analysis Network: An Interpretable Idea in Deep Learning

Cited by: 4
Authors
Li, Xinyu [1 ]
Gao, Xiaoguang [1 ]
Wang, Qianglong [1 ]
Wang, Chenfeng [1 ]
Li, Bo [1 ]
Wan, Kaifang [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Elect & Informat, Xian, Peoples R China
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Deep learning; Bayesian networks; Feature analysis; Correlation clustering; FAULT-DETECTION; ALGORITHM; MODELS; DIAGNOSIS;
DOI
10.1007/s12559-023-10238-0
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Deep Learning (DL) stands out as a leading model family for processing high-dimensional data, whose hidden layers extract features through nonlinear transformations. However, because these features are hard to explain, DL models have low interpretability. Conversely, a Bayesian network (BN) is transparent and highly interpretable, and can therefore help interpret DL. To improve the interpretability of DL from the perspective of feature cognition, we propose the feature analysis network (FAN), a DL structure fused with a BN. FAN retains the feature extraction capability of DL and applies a BN as the output layer to learn the relationships between the features and the outputs; these relationships are represented probabilistically and intuitively by the structure and parameters of the BN. In a further study, a correlation clustering-based feature analysis network (cc-FAN) is proposed to detect the correlations among inputs and to preserve this information, which helps explain the physical meaning of the features to a certain extent. To quantitatively evaluate the interpretability of the model, we design separate network simplification and interpretability indicators. Experiments on eight datasets show that FAN achieves better interpretability than the other models with essentially unchanged accuracy and similar model complexity. On the radar effect mechanism dataset, measured by the feature structure-based relevance interpretability indicator, FAN is up to 4.8 times better than the other models, and cc-FAN is up to 21.5 times better. FAN and cc-FAN enhance the interpretability of the DL model structure at the feature level; moreover, based on the input correlations, cc-FAN helps us better understand the physical meaning of the features.
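The abstract describes FAN as a neural feature extractor whose output layer is replaced by a Bayesian network over the extracted features. The sketch below is only an illustrative approximation of that idea, not the authors' implementation: it uses an untrained random hidden layer as the "DL" extractor, binarizes its activations, and fits a naive-Bayes factorization P(y)·∏ᵢP(fᵢ|y) as the simplest possible discrete BN from features to output. All data, sizes, and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs (hypothetical stand-in for a real dataset)
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 4)),
               rng.normal(+1.0, 0.5, size=(100, 4))])
y = np.array([0] * 100 + [1] * 100)

# "DL" feature extractor: one random tanh hidden layer (untrained, illustration only)
W = rng.normal(size=(4, 3))
H = np.tanh(X @ W)

# Discretize features so a discrete BN can model them
F = (H > 0).astype(int)

def fit_bn(F, y):
    """Fit the simplest discrete BN over features: P(y) and P(f_i=1 | y)."""
    classes = np.unique(y)
    prior = np.array([(y == c).mean() for c in classes])
    # Laplace-smoothed conditional probabilities, shape (n_classes, n_features)
    cond = np.array([(F[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)
                     for c in classes])
    return prior, cond

def predict_bn(F, prior, cond):
    """Pick argmax_c of log P(y=c) + sum_i log P(f_i | y=c)."""
    logp = np.log(prior)[:, None] + (
        np.log(cond)[:, :, None] * F.T[None]
        + np.log(1.0 - cond)[:, :, None] * (1 - F.T)[None]
    ).sum(axis=1)
    return logp.argmax(axis=0)

prior, cond = fit_bn(F, y)
pred = predict_bn(F, prior, cond)
acc = (pred == y).mean()
```

Unlike a dense softmax layer, the learned tables `prior` and `cond` can be read off directly, which is the interpretability benefit the paper attributes to the BN output layer; the full FAN additionally learns the BN structure rather than assuming naive-Bayes independence.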
Pages: 803-826 (24 pages)
Related papers
50 total; entries [21]-[30] shown
  • [21] Network Traffic Feature Engineering Based on Deep Learning
    Wang, Kai
    Chen, Liyun
    Wang, Shuai
    Wang, Zengguang
    3RD ANNUAL INTERNATIONAL CONFERENCE ON INFORMATION SYSTEM AND ARTIFICIAL INTELLIGENCE (ISAI2018), 2018, 1069
  • [22] Interpretable deep learning methods for multiview learning
    Wang, Hengkang
    Lu, Han
    Sun, Ju
    Safo, Sandra E.
    BMC BIOINFORMATICS, 2024, 25 (01)
  • [23] Towards Interpretable Deep Learning: A Feature Selection Framework for Prognostics and Health Management Using Deep Neural Networks
    Barraza, Joaquin Figueroa
    Droguett, Enrique Lopez
    Martins, Marcelo Ramos
    SENSORS, 2021, 21 (17)
  • [24] Dictionary Learning-Guided Deep Interpretable Network for Hyperspectral Change Detection
    Zhao, Jingyu
    Xiao, Song
    Dong, Wenqian
    Qu, Jiahui
    Li, Yunsong
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2023, 20
  • [25] Performance analysis and feature selection for network-based intrusion detection with deep learning
    Caner, Serhat
    Erdogmus, Nesli
    Erten, Y. Murat
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2022, 30 (03) : 629 - 643
  • [26] A method for sperm activity analysis based on feature point detection network in deep learning
    Chen, Zhong
    Yang, Jinkun
    Luo, Chen
    Zhang, Changheng
    FRONTIERS IN COMPUTER SCIENCE, 2022, 4
  • [27] Acoustic Emission Signal Classification Using Feature Analysis and Deep Learning Neural Network
    Wu, Jian-Da
    Wong, Yu-Han
    Luo, Wen-Jun
    Yao, Kai-Chao
    FLUCTUATION AND NOISE LETTERS, 2021, 20 (03)
  • [28] Small object detection using deep feature learning and feature fusion network
    Tong, Kang
    Wu, Yiquan
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 132
  • [29] Feature Learning for Interpretable, Performant Decision Trees
    Good, Jack H.
    Kovach, Torin
    Miller, Kyle
    Dubrawski, Artur
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [30] A SUPERVISED ONLINE CROWD ANALYSIS NETWORK WITH DUAL-TASK DEEP LEARNING IDEA IN HEALTHCARE APPLICATION
    Wang, Junli
    Leng, Wenhao
    Wang, Shitong
    Zhang, Tao
    Jin, Jiali
    JOURNAL OF MECHANICS IN MEDICINE AND BIOLOGY, 2024,