The Contextual Lasso: Sparse Linear Models via Deep Neural Networks

Cited by: 0
Authors
Thompson, Ryan [1 ,2 ]
Dezfouli, Amir [3 ]
Kohn, Robert [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] CSIRO's Data61, Eveleigh, Australia
[3] BIMLOGIQ, Sydney, NSW, Australia
Keywords
REGRESSION; REGULARIZATION; SELECTION
DOI
Not available
CLC classification number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Sparse linear models are one of several core tools for interpretable machine learning, a field of emerging importance as predictive models permeate decision-making in many domains. Unfortunately, sparse linear models are far less flexible as functions of their input features than black-box models like deep neural networks. With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which are candidates for inclusion as variables in an interpretable model, and contextual features, which select from the candidate variables and determine their effects. This dichotomy leads us to the contextual lasso, a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features. The fitting process learns this function nonparametrically via a deep neural network. To attain sparse coefficients, we train the network with a novel lasso regularizer in the form of a projection layer that maps the network's output onto the space of ℓ1-constrained linear models. An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso without sacrificing the predictive power of a standard deep neural network.
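
The projection layer described in the abstract can be illustrated with a small sketch. The code below is illustrative only and is not the authors' implementation: it assumes PyTorch, and the helper project_onto_l1_ball, the module ContextualSparseLinear, and all hyperparameters (hidden width, ball radius) are hypothetical names and choices. It applies the standard sort-and-threshold Euclidean projection onto the ℓ1 ball to coefficients produced by a feed-forward network that takes the contextual features as input.

    import torch

    def project_onto_l1_ball(v, radius=1.0):
        # Euclidean projection of each row of v onto the l1 ball of the given
        # radius, via the usual sort-and-threshold construction. Rows already
        # inside the ball are returned unchanged. (Hypothetical helper name.)
        abs_v = v.abs()
        inside = abs_v.sum(dim=-1, keepdim=True) <= radius
        mu, _ = torch.sort(abs_v, dim=-1, descending=True)
        cumsum = mu.cumsum(dim=-1)
        k = torch.arange(1, v.shape[-1] + 1, device=v.device, dtype=v.dtype)
        rho = (mu * k > cumsum - radius).sum(dim=-1, keepdim=True)
        theta = (cumsum.gather(-1, rho - 1) - radius) / rho.to(v.dtype)
        proj = torch.sign(v) * torch.clamp(abs_v - theta, min=0.0)
        return torch.where(inside, v, proj)

    class ContextualSparseLinear(torch.nn.Module):
        # A feed-forward network maps contextual features z to a coefficient
        # vector (plus intercept) for the explanatory features x; the projection
        # keeps the coefficients inside an l1 ball, so many of them can be
        # exactly zero. Architecture and hyperparameters are illustrative.
        def __init__(self, n_contextual, n_explanatory, hidden=64, radius=1.0):
            super().__init__()
            self.radius = radius
            self.net = torch.nn.Sequential(
                torch.nn.Linear(n_contextual, hidden),
                torch.nn.ReLU(),
                torch.nn.Linear(hidden, n_explanatory + 1),  # coefficients + intercept
            )

        def forward(self, x, z):
            out = self.net(z)
            beta, intercept = out[..., :-1], out[..., -1]
            beta = project_onto_l1_ball(beta, self.radius)
            # The per-observation beta is the interpretable sparse linear model.
            return (x * beta).sum(dim=-1) + intercept

In a sketch like this, the network would be trained end-to-end with an ordinary regression loss (e.g., mean squared error), with the ball radius treated as a tuning parameter chosen by validation.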
Pages: 22
Related papers (50 in total)
  • [31] Compressing Deep Neural Networks With Sparse Matrix Factorization
    Wu, Kailun
    Guo, Yiwen
    Zhang, Changshui
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (10) : 3828 - 3838
  • [32] INVESTIGATING SPARSE DEEP NEURAL NETWORKS FOR SPEECH RECOGNITION
    Pironkov, Gueorgui
    Dupont, Stephane
    Dutoit, Thierry
    2015 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING (ASRU), 2015, : 124 - 129
  • [33] Theoretical Foundations of Deep Learning via Sparse Representations: A multilayer sparse model and its connection to convolutional neural networks
    Papyan, Vardan
    Romano, Yaniv
    Sulam, Jeremias
    Elad, Michael
    IEEE SIGNAL PROCESSING MAGAZINE, 2018, 35 (04) : 72 - 89
  • [34] Non-linear, sparse dimensionality reduction via path lasso penalized autoencoders
    Allerbo, Oskar
    Jörnsten, Rebecka
    JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22
  • [35] Probabilistic Models with Deep Neural Networks
    Masegosa, Andres R.
    Cabanas, Rafael
    Langseth, Helge
    Nielsen, Thomas D.
    Salmeron, Antonio
    ENTROPY, 2021, 23 (01) : 1 - 27
  • [36] Deep Neural Networks as Scientific Models
    Cichy, Radoslaw M.
    Kaiser, Daniel
    TRENDS IN COGNITIVE SCIENCES, 2019, 23 (04) : 305 - 317
  • [37] Contextual modulation of affect: Comparing humans and deep neural networks
    Shin, Soomin
    Kim, Doo Yon
    Wallraven, Christian
    COMPANION PUBLICATION OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ICMI 2022, 2022, : 127 - 133
  • [38] DeePCG: Constructing coarse-grained models via deep neural networks
    Zhang, Linfeng
    Han, Jiequn
    Wang, Han
    Car, Roberto
    E, Weinan
    JOURNAL OF CHEMICAL PHYSICS, 2018, 149 (03)
  • [39] Learning fused lasso parameters in portfolio selection via neural networks
    Corsaro, Stefania
    De Simone, Valentina
    Marino, Zelda
    Scognamiglio, Salvatore
    QUALITY & QUANTITY, 2024, 58 (5) : 4281 - 4299
  • [40] Adaptive lasso in sparse vector autoregressive models
    Lee, Sl Gi
    Baek, Changryong
    KOREAN JOURNAL OF APPLIED STATISTICS, 2016, 29 (01) : 27 - 39