The Contextual Lasso: Sparse Linear Models via Deep Neural Networks

Cited: 0
Authors
Thompson, Ryan [1,2]
Dezfouli, Amir [3]
Kohn, Robert [1]
Affiliations
[1] University of New South Wales, Sydney, NSW, Australia
[2] CSIRO's Data61, Eveleigh, Australia
[3] BIMLOGIQ, Sydney, NSW, Australia
Keywords
Regression; Regularization; Selection
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Sparse linear models are one of several core tools for interpretable machine learning, a field of emerging importance as predictive models permeate decision-making in many domains. Unfortunately, sparse linear models are far less flexible as functions of their input features than black-box models like deep neural networks. With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which are candidates for inclusion as variables in an interpretable model, and contextual features, which select from the candidate variables and determine their effects. This dichotomy leads us to the contextual lasso, a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features. The fitting process learns this function nonparametrically via a deep neural network. To attain sparse coefficients, we train the network with a novel lasso regularizer in the form of a projection layer that maps the network's output onto the space of ℓ1-constrained linear models. An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso without sacrificing the predictive power of a standard deep neural network.
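The architecture the abstract describes (a network that maps contextual features to coefficients, followed by a projection layer enforcing the ℓ1 constraint) can be illustrated in a few lines of NumPy. The sketch below is not the authors' implementation: `coef_net`, `contextual_lasso_predict`, and the fixed `radius` are hypothetical stand-ins, and the paper's actual layer is trained end-to-end with a tuned constraint and an intercept. The projection itself follows the standard sorting-based algorithm for Euclidean projection onto the ℓ1 ball (Duchi et al., 2008).

```python
import numpy as np

def project_l1_ball(beta, radius=1.0):
    """Euclidean projection of `beta` onto {b : ||b||_1 <= radius},
    via the sorting-based algorithm of Duchi et al. (2008)."""
    if np.abs(beta).sum() <= radius:
        return beta  # already feasible; projection is the identity
    u = np.sort(np.abs(beta))[::-1]          # magnitudes, descending
    css = np.cumsum(u)                       # running sums of magnitudes
    idx = np.arange(1, beta.size + 1)
    rho = np.nonzero(u * idx > css - radius)[0][-1]
    tau = (css[rho] - radius) / (rho + 1.0)  # soft-threshold level
    return np.sign(beta) * np.maximum(np.abs(beta) - tau, 0.0)

def contextual_lasso_predict(x_expl, z_ctx, coef_net, radius=1.0):
    """Sketch of one forward pass: `coef_net` (a stand-in for the trained
    network) maps each row of contextual features to a coefficient vector;
    the projection layer soft-thresholds those coefficients onto the l1
    ball, zeroing some of them; the resulting sparse linear model is then
    applied row-wise to the explanatory features."""
    beta = coef_net(z_ctx)                                      # (n, p) raw coefficients
    beta = np.apply_along_axis(project_l1_ball, 1, beta, radius)
    return np.sum(x_expl * beta, axis=1)                        # row-wise inner products
```

For example, with `rng = np.random.default_rng(0)` and a random linear map standing in for the trained network, `contextual_lasso_predict(rng.normal(size=(10, 5)), rng.normal(size=(10, 3)), lambda z: z @ rng.normal(size=(3, 5)))` returns ten predictions whose per-observation coefficient vectors have been soft-thresholded onto the ℓ1 ball, typically zeroing some entries whenever the raw network outputs exceed the ℓ1 budget.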
Pages: 22