Distribution-dependent feature selection for deep neural networks

Cited by: 1
|
Authors
Zhao, Xuebin [1 ]
Li, Weifu [1 ]
Chen, Hong [1 ]
Wang, Yingjie [2 ]
Chen, Yanhong [3 ]
John, Vijay [4 ]
Affiliations
[1] Huazhong Agr Univ, Coll Sci, Wuhan 430062, Peoples R China
[2] Huazhong Agr Univ, Coll Informat, Wuhan 430062, Peoples R China
[3] Chinese Acad Sci, Natl Space Sci Ctr, Beijing 100190, Peoples R China
[4] Toyota Technol Inst, Res Ctr Smart Vehicles, Tempaku Ku, 2-12-1 Hisakata, Nagoya, Aichi 4688511, Japan
Fund
National Natural Science Foundation of China;
Keywords
Feature selection; Coronal mass ejections; Deep neural networks; Interpretability; Hypothesis-testing; FALSE DISCOVERY RATE; REGRESSION; FILTER;
DOI
10.1007/s10489-021-02663-1
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While deep neural networks (DNNs) have achieved impressive performance on a wide variety of tasks, their black-box nature hinders their applicability to high-risk decision-making fields. In such fields, besides accurate prediction, it is also desirable to provide interpretable insights into DNNs, e.g., by screening important features based on their contributions to predictive accuracy. To improve the interpretability of DNNs, this paper proposes a new feature selection algorithm for DNNs that integrates the knockoff technique with the distribution information of irrelevant features. With the help of knockoff features and the central limit theorem, we show that an irrelevant feature's statistic follows a known Gaussian distribution under mild conditions. This information is used in hypothesis testing to discover the key features associated with the DNNs. Empirical evaluations on simulated data demonstrate that the proposed method selects more truly informative features and achieves higher F1 scores. The Friedman test and the post-hoc Nemenyi test are employed to validate the superiority of the proposed method. Finally, we apply our method to Coronal Mass Ejection (CME) data and uncover the key features that contribute to DNN-based CME arrival-time prediction.
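The pipeline the abstract describes — build knockoff copies of the features, compute a per-feature statistic whose null distribution is approximately Gaussian, then hypothesis-test to select features — can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it substitutes absolute least-squares coefficients for the DNN-based importance measure, assumes independent standard-Gaussian features (so an i.i.d. copy is a valid model-X knockoff), estimates the null scale robustly via the MAD, and controls the false discovery rate with Benjamini-Hochberg. All variable names and parameter choices are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated data: p features, the first k truly informative.
n, p, k = 500, 20, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0
y = X @ beta + rng.standard_normal(n)

# Model-X knockoffs: with independent Gaussian features, an i.i.d.
# copy matches the feature distribution and is independent of y.
X_knock = rng.standard_normal((n, p))

# Feature statistic W_j: importance of feature j minus importance of
# its knockoff, using least-squares coefficients on the augmented
# design [X, X_knock] as a linear stand-in for DNN importance.
A = np.hstack([X, X_knock])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
W = np.abs(coef[:p]) - np.abs(coef[p:])

# Under the null, W_j is approximately symmetric around zero; estimate
# its scale robustly (MAD * 1.4826 for Gaussian consistency) and form
# two-sided Gaussian p-values.
sigma = 1.4826 * np.median(np.abs(W - np.median(W)))
pvals = 2 * norm.sf(np.abs(W) / sigma)

# Benjamini-Hochberg step-up procedure at level q.
q = 0.1
order = np.argsort(pvals)
passed = pvals[order] <= q * np.arange(1, p + 1) / p
if passed.any():
    selected = sorted(order[: passed.nonzero()[0].max() + 1].tolist())
else:
    selected = []
print("selected features:", selected)
```

With these settings the five informative features produce statistics far in the tail of the estimated null and are retained, while most irrelevant features are screened out; the paper's method replaces the linear importance proxy with one derived from the trained DNN.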
Pages: 4432 - 4442
Page count: 11
Related Papers
50 records
  • [21] Pruning Filter via Gaussian Distribution Feature for Deep Neural Networks Acceleration
    Xu, Jianrong
    Diao, Boyu
    Cui, Bifeng
    Yang, Kang
    Li, Chao
    Hong, Hailong
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [22] Deep Neural Networks Regularization Using a Combination of Sparsity Inducing Feature Selection Methods
    Farokhmanesh, Fatemeh
    Sadeghi, Mohammad Taghi
    NEURAL PROCESSING LETTERS, 2021, 53 (01) : 701 - 720
  • [24] A Distribution-Dependent Analysis of Meta-Learning
    Konobeev, Mikhail
    Kuzborskij, Ilja
    Szepesvari, Csaba
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [25] Selection dynamics for deep neural networks
    Liu, Hailiang
    Markowich, Peter
    JOURNAL OF DIFFERENTIAL EQUATIONS, 2020, 269 (12) : 11540 - 11574
  • [26] Distribution-dependent saccades in children with strabismus and in normals
    Zo Kapoula
    Maria Pia Bucci
    Experimental Brain Research, 2002, 143 : 264 - 268
  • [27] Distribution-dependent stochastic porous media equations
    Gao, Jingyue
    Hong, Wei
    Liu, Wei
    STOCHASTICS AND DYNAMICS, 2022, 22 (08)
  • [28] A sufficient condition for polynomial distribution-dependent learnability
    Anthony, M
    ShaweTaylor, J
    DISCRETE APPLIED MATHEMATICS, 1997, 77 (01) : 1 - 12
  • [29] Feature Selection Using Artificial Neural Networks
    Ledesma, Sergio
    Cerda, Gustavo
    Avina, Gabriel
    Hernandez, Donato
    Torres, Miguel
    MICAI 2008: ADVANCES IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2008, 5317 : 351 - 359
  • [30] Feature Selection and Extraction for Graph Neural Networks
    Acharya, Deepak Bhaskar
    Zhang, Huaming
    ACMSE 2020: PROCEEDINGS OF THE 2020 ACM SOUTHEAST CONFERENCE, 2020, : 252 - 255