Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity

Cited by: 2
Authors
Xu, Shiyun [1 ]
Bu, Zhiqi [1 ]
Chaudhari, Pratik [2 ]
Barnett, Ian J. [3 ]
Affiliations
[1] Univ Penn, Dept Appl Math & Computat Sci, Philadelphia, PA 19104 USA
[2] Univ Penn, Dept Elect & Syst Engn, Philadelphia, PA 19104 USA
[3] Univ Penn, Dept Biostat Epidemiol & Informat, Philadelphia, PA 19104 USA
Funding
U.S. National Science Foundation
Keywords
Interpretability; Additive Models; Group LASSO; Feature Selection; Variable Selection; LASSO; Regression; Shrinkage
DOI
10.1007/978-3-031-43418-1_21
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Interpretable machine learning has demonstrated impressive performance while preserving explainability. In particular, neural additive models (NAMs) bring interpretability to black-box deep learning and achieve state-of-the-art accuracy among the large family of generalized additive models. To equip NAMs with feature selection and improve their generalization, we propose sparse neural additive models (SNAM), which employ group sparsity regularization (e.g., Group LASSO), where each feature is learned by a sub-network whose trainable parameters form a single group. We study the theoretical properties of SNAM with novel techniques that handle a non-parametric ground truth, thereby extending results for classical sparse linear models such as the LASSO, which apply only to a parametric truth. Specifically, we show that SNAM trained with subgradient or proximal gradient descent provably converges to zero training loss as t → ∞, and that the estimation error of SNAM vanishes asymptotically as n → ∞. We also prove that SNAM, like the LASSO, achieves exact support recovery, i.e., perfect feature selection, under appropriate regularization. Moreover, we show that SNAM generalizes well and preserves 'identifiability', recovering each feature's effect. We validate our theory via extensive experiments that further demonstrate the accuracy and efficiency of SNAM. (The appendix can be found at https://arxiv.org/abs/2202.12482.)
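As a concrete illustration of the construction described in the abstract, below is a minimal PyTorch sketch (not the authors' released implementation): each feature gets its own sub-network, outputs are summed additively, and a Group LASSO penalty treats each sub-network's parameters as one group so that regularization can remove a feature entirely. All names (`FeatureNet`, `SNAM`, `group_lasso_penalty`, `prox_group`) and all hyperparameters are illustrative assumptions.

```python
# Minimal SNAM-style sketch, assuming: one small MLP per feature,
# additive output, and a group-LASSO penalty over each sub-network.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Sub-network f_j: maps one scalar feature to its additive effect."""
    def __init__(self, hidden=32):
        super().__init__()
        # No bias on the output layer, so zeroing every parameter of the
        # sub-network removes the feature's effect entirely.
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1, bias=False),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

class SNAM(nn.Module):
    """Additive model: y_hat = beta_0 + sum_j f_j(x_j)."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.subnets = nn.ModuleList(FeatureNet(hidden) for _ in range(n_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        effects = [f(x[:, j:j + 1]) for j, f in enumerate(self.subnets)]
        return self.bias + torch.stack(effects).sum(dim=0).squeeze(-1)

def group_norm(f):
    """l2 norm of all parameters of one sub-network (one group).
    The small epsilon keeps the subgradient finite at exactly zero."""
    return torch.sqrt(sum(p.pow(2).sum() for p in f.parameters()) + 1e-12)

def group_lasso_penalty(model):
    """Sum of per-feature group norms; drives whole sub-networks to zero."""
    return sum(group_norm(f) for f in model.subnets)

def prox_group(f, step):
    """Block soft-thresholding for the proximal-gradient variant:
    scale the group by (1 - step/||group||)_+, exactly zero when small."""
    with torch.no_grad():
        scale = torch.clamp(1.0 - step / group_norm(f), min=0.0)
        for p in f.parameters():
            p.mul_(scale)

# Subgradient descent on MSE + lambda * group-LASSO penalty (toy data).
model = SNAM(n_features=10)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
lam = 1e-3
x, y = torch.randn(256, 10), torch.randn(256)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y) + lam * group_lasso_penalty(model)
    loss.backward()
    opt.step()
```

The proximal-gradient variant mentioned in the abstract would instead back-propagate the MSE alone and then call `prox_group(f, lr * lam)` on each sub-network after every optimizer step, so inactive features are set exactly to zero rather than merely shrunk.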
Pages: 343-359
Page count: 17