A review and benchmark of feature importance methods for neural networks

Cited by: 0
Authors
Mandler, Hannes [1]
Weigand, Bernhard [1]
Affiliations
[1] Univ Stuttgart, Inst Aerosp Thermodynam, Stuttgart, Germany
Keywords
explainable artificial intelligence (XAI); interpretable machine learning; attribution method; feature importance; sensitivity analysis (SA); neural network; GLOBAL SENSITIVITY-ANALYSIS; EXPLAINABLE ARTIFICIAL-INTELLIGENCE; MEASURING UNCERTAINTY IMPORTANCE; MATHEMATICAL-MODELS; SOBOL INDEXES; BLACK-BOX; VARIABLES; DESIGN;
DOI
10.1145/3679012
CLC number
TP301 [Theory, Methods];
Discipline code
081202 ;
Abstract
Feature attribution methods (AMs) are a simple means to provide explanations for the predictions of black-box models such as neural networks. Due to their conceptual differences, however, the numerous methods yield ambiguous explanations. While this allows for different insights into the model, it also complicates the decision of which method to adopt. This article summarizes the current state of the art regarding AMs, covering both the requirements and desiderata of the methods themselves and the properties of their explanations. Based on a survey of existing methods, a representative subset, consisting of the delta-sensitivity index, permutation feature importance, variance-based feature importance in artificial neural networks, and DeepSHAP, is described in greater detail and, for the first time, benchmarked in a regression context. Specifically for this purpose, a new verification strategy for model-specific AMs is proposed. As expected, the explanations' agreement with intuition and with each other clearly depends on the AMs' properties. This has two implications. First, careful reasoning about the selection of an AM is required. Second, it is recommended to apply multiple AMs and combine their insights in order to reduce the model's opacity even further. CCS Concepts: • Computing methodologies → Causal reasoning and diagnostics; Neural networks; • Information systems → Decision support systems;
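Of the four benchmarked AMs, permutation feature importance is the most self-contained to illustrate. A minimal sketch follows, assuming a fitted regressor with a `predict` method and using mean squared error as the score; the function name and signature are illustrative, not the paper's implementation.

```python
# Minimal sketch of permutation feature importance for a regression model.
# Assumes `model` exposes a .predict(X) method; MSE is used as the error metric.
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, rng=None):
    """Mean increase in MSE when each feature column is shuffled."""
    rng = np.random.default_rng(rng)
    base_error = np.mean((model.predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature-target association
            errors.append(np.mean((model.predict(X_perm) - y) ** 2))
        # A large error increase means the model relied on this feature.
        importances[j] = np.mean(errors) - base_error
    return importances
```

Being model-agnostic, this method only probes the model through its predictions, which is precisely why (as the abstract notes) its explanations can disagree with those of model-specific methods such as DeepSHAP.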
Pages: 30
Related papers
50 total
  • [1] A Benchmark for Interpretability Methods in Deep Neural Networks
    Hooker, Sara
    Erhan, Dumitru
    Kindermans, Pieter-Jan
    Kim, Been
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [2] Measuring Feature Importance of Convolutional Neural Networks
    Zhang, Xiaohang
    Gao, Jiuyi
    IEEE ACCESS, 2020, 8 : 196062 - 196074
  • [3] Variance-Based Feature Importance in Neural Networks
    de Sa, Claudio Rebelo
    DISCOVERY SCIENCE (DS 2019), 2019, 11828 : 306 - 315
  • [4] A quantitative benchmark of neural network feature selection methods for detecting nonlinear signals
    Passemiers, Antoine
    Folco, Pietro
    Raimondi, Daniele
    Birolo, Giovanni
    Moreau, Yves
    Fariselli, Piero
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [5] Early Stabilizing Feature Importance for TensorFlow Deep Neural Networks
    Heaton, Jeff
    McElwee, Steven
    Fraley, James
    Cannady, James
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 4618 - 4624
  • [6] A Novel Feature Importance Based Layer to Improve Neural Networks
    Baydoun, Mohammed
    Ghaziri, Hassan
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [7] Improvement of neural networks learning by feature extraction methods
    Kodors, Sergejs
    AICT 2013: APPLIED INFORMATION AND COMMUNICATION TECHNOLOGIES, 2013, : 43 - 47
  • [8] Ranking Feature-Block Importance in Artificial Multiblock Neural Networks
    Jenul, Anna
    Schrunner, Stefan
    Huynh, Bao Ngoc
    Helin, Runar
    Futsaether, Cecilia Marie
    Liland, Kristian Hovde
    Tomic, Oliver
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 163 - 175
  • [9] Feature perturbation augmentation for reliable evaluation of importance estimators in neural networks
    Brocki, Lennart
    Chung, Neo Christopher
    PATTERN RECOGNITION LETTERS, 2023, 176 : 131 - 139
  • [10] Sampling methods and feature selection for mortality prediction with neural networks
    Steinmeyer, Christian
    Wiese, Lena
    JOURNAL OF BIOMEDICAL INFORMATICS, 2020, 111