A review and benchmark of feature importance methods for neural networks

Times cited: 0
Authors
Mandler, Hannes [1 ]
Weigand, Bernhard [1 ]
Affiliations
[1] Univ Stuttgart, Inst Aerosp Thermodynam, Stuttgart, Germany
Keywords
explainable artificial intelligence (XAI); interpretable machine learning; attribution method; feature importance; sensitivity analysis (SA); neural network; GLOBAL SENSITIVITY-ANALYSIS; EXPLAINABLE ARTIFICIAL-INTELLIGENCE; MEASURING UNCERTAINTY IMPORTANCE; MATHEMATICAL-MODELS; SOBOL INDEXES; BLACK-BOX; VARIABLES; DESIGN;
DOI
10.1145/3679012
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline classification code
081202 ;
Abstract
Feature attribution methods (AMs) are a simple means of explaining the predictions of black-box models such as neural networks. Owing to their conceptual differences, however, the numerous available methods yield ambiguous explanations. While this allows different insights into the model to be obtained, it also complicates the decision of which method to adopt. This article summarizes the current state of the art regarding AMs, covering both the requirements and desiderata of the methods themselves and the properties of their explanations. Based on a survey of existing methods, a representative subset, consisting of the delta-sensitivity index, permutation feature importance, variance-based feature importance in artificial neural networks, and DeepSHAP, is described in greater detail and, for the first time, benchmarked in a regression context. Specifically for this purpose, a new verification strategy for model-specific AMs is proposed. As expected, the explanations' agreement with intuition and with each other clearly depends on the AMs' properties. This has two implications. First, careful reasoning about the selection of an AM is required. Second, it is recommended to apply multiple AMs and combine their insights in order to reduce the model's opacity even further.
CCS Concepts: • Computing methodologies → Causal reasoning and diagnostics; Neural networks; • Information systems → Decision support systems
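Of the benchmarked methods, permutation feature importance is the most straightforward to illustrate. The sketch below is a minimal NumPy implementation of the general idea (not the paper's exact benchmarking setup): a feature's importance is taken as the increase in mean-squared error when that feature's column is shuffled, breaking its association with the target. The toy model and data are hypothetical, chosen only so that the expected ranking is obvious.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Importance of feature j = mean increase in MSE after shuffling column j."""
    rng = np.random.default_rng(seed)
    baseline = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            scores.append(np.mean((predict(Xp) - y) ** 2))
        importances[j] = np.mean(scores) - baseline
    return importances

# Toy regression: y depends on x0 only, so x1 should score near zero.
X = np.random.default_rng(1).normal(size=(500, 2))
y = 3.0 * X[:, 0]
imp = permutation_importance(lambda X: 3.0 * X[:, 0], X, y)
```

In a regression benchmark such as the one described above, the same procedure would be applied to the trained network's prediction function; because the method only queries the model through `predict`, it is model-agnostic, in contrast to model-specific methods like DeepSHAP.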
Pages: 30