The Contextual Lasso: Sparse Linear Models via Deep Neural Networks

Cited by: 0
Authors
Thompson, Ryan [1 ,2 ]
Dezfouli, Amir [3 ]
Kohn, Robert [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] CSIRO's Data61, Eveleigh, Australia
[3] BIMLOGIQ, Sydney, NSW, Australia
Keywords
REGRESSION; REGULARIZATION; SELECTION;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Sparse linear models are one of several core tools for interpretable machine learning, a field of emerging importance as predictive models permeate decision-making in many domains. Unfortunately, sparse linear models are far less flexible as functions of their input features than black-box models like deep neural networks. With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which are candidates for inclusion as variables in an interpretable model, and contextual features, which select from the candidate variables and determine their effects. This dichotomy leads us to the contextual lasso, a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features. The fitting process learns this function nonparametrically via a deep neural network. To attain sparse coefficients, we train the network with a novel lasso regularizer in the form of a projection layer that maps the network's output onto the space of ℓ1-constrained linear models. An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso without sacrificing the predictive power of a standard deep neural network.
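The abstract describes a network that maps contextual features to the coefficients of a sparse linear model over the explanatory features, with sparsity enforced by a projection layer onto an ℓ1 ball. The sketch below is an illustrative reconstruction of that idea, not the authors' code: the names ContextualSparseLinear and project_l1_ball, the MLP architecture, and the fixed projection radius are assumptions; the projection step uses the standard sorting-based Euclidean projection onto the ℓ1 ball (Duchi et al., 2008).

```python
import torch
import torch.nn as nn


def project_l1_ball(beta: torch.Tensor, radius: float) -> torch.Tensor:
    """Euclidean projection of each row of `beta` (shape: batch x d) onto the
    l1 ball of the given radius. Rows already inside the ball are returned
    unchanged; others are soft-thresholded, producing exact zeros."""
    abs_b = beta.abs()
    inside = abs_b.sum(dim=-1, keepdim=True) <= radius
    sorted_b, _ = torch.sort(abs_b, dim=-1, descending=True)
    cumsum = sorted_b.cumsum(dim=-1)
    k = torch.arange(1, beta.shape[-1] + 1, device=beta.device, dtype=beta.dtype)
    cond = sorted_b - (cumsum - radius) / k > 0
    rho = cond.to(beta.dtype).sum(dim=-1, keepdim=True)  # number of active entries
    theta = (cumsum.gather(-1, rho.long() - 1) - radius) / rho  # shrinkage threshold
    projected = torch.sign(beta) * torch.clamp(abs_b - theta, min=0.0)
    return torch.where(inside, beta, projected)


class ContextualSparseLinear(nn.Module):
    """Hypothetical module: an MLP turns contextual features z into the
    coefficients of a linear model over the explanatory features x, and an
    l1-projection layer makes those coefficients sparse."""

    def __init__(self, n_contextual: int, n_explanatory: int,
                 hidden: int = 64, radius: float = 1.0):
        super().__init__()
        self.radius = radius
        self.mlp = nn.Sequential(
            nn.Linear(n_contextual, hidden), nn.ReLU(),
            nn.Linear(hidden, n_explanatory + 1),  # coefficients + intercept
        )

    def forward(self, x_explanatory: torch.Tensor, z_contextual: torch.Tensor):
        out = self.mlp(z_contextual)
        beta, intercept = out[:, :-1], out[:, -1]
        beta = project_l1_ball(beta, self.radius)  # sparse, context-dependent
        y_hat = (beta * x_explanatory).sum(dim=-1) + intercept
        return y_hat, beta


# Example usage (shapes assumed): train by minimizing an ordinary loss on y_hat.
# model = ContextualSparseLinear(n_contextual=4, n_explanatory=10)
# y_hat, beta = model(x, z); loss = ((y_hat - y) ** 2).mean()
```

In this reading, the projection radius plays the role of the lasso's constraint level and would be tuned as a hyperparameter; because the projection is differentiable almost everywhere, the whole model can be trained end to end with standard gradient methods.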
Pages: 22
Related papers
50 items in total
  • [21] Robustness Verification of Classification Deep Neural Networks via Linear Programming
    Lin, Wang
    Yang, Zhengfeng
    Chen, Xin
    Zhao, Qingye
    Li, Xiangkun
    Liu, Zhiming
    He, Jifeng
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 11410 - 11419
  • [22] Accelerating Distributed Inference of Sparse Deep Neural Networks via Mitigating the Straggler Effect
    Mofrad, Mohammad Hasanzadeh
    Melhem, Rami
    Ahmad, Yousuf
    Hammoud, Mohammad
    2020 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE (HPEC), 2020,
  • [23] Detection of interacting variables for generalized linear models via neural networks
    Havrylenko, Yevhen
    Heger, Julia
    EUROPEAN ACTUARIAL JOURNAL, 2024, 14 (02) : 551 - 580
  • [24] Estimating sparse models from multivariate discrete data via transformed Lasso
    Roos, Teemu
    Yu, Bin
2009 INFORMATION THEORY AND APPLICATIONS WORKSHOP, 2009, : 287+
  • [25] Structured Compression of Deep Neural Networks with Debiased Elastic Group LASSO
    Oyedotun, Oyebade K.
    Aouada, Djamila
    Ottersten, Bjoern
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 2266 - 2275
  • [26] Cascade Deep Networks for Sparse Linear Inverse Problems
    Zhang, Huan
    Shi, Hong
    Wang, Wenwu
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 812 - 817
  • [27] Leveraging Sparse Linear Layers for Debuggable Deep Networks
    Wong, Eric
    Santurkar, Shibani
    Madry, Aleksander
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [28] DEEP SPARSE RECTIFIER NEURAL NETWORKS FOR SPEECH DENOISING
    Xu, Lie
    Choy, Chiu-Sing
    Li, Yi-Wen
    2016 IEEE INTERNATIONAL WORKSHOP ON ACOUSTIC SIGNAL ENHANCEMENT (IWAENC), 2016,
  • [29] PHONE RECOGNITION WITH DEEP SPARSE RECTIFIER NEURAL NETWORKS
    Toth, Laszlo
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 6985 - 6989
  • [30] Performance of Training Sparse Deep Neural Networks on GPUs
    Wang, Jianzong
    Huang, Zhangcheng
    Kong, Lingwei
    Xiao, Jing
    Wang, Pengyu
    Zhang, Lu
    Li, Chao
    2019 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE (HPEC), 2019,