On the stable recovery of deep structured linear networks under sparsity constraints

Cited by: 0
Authors
Malgouyres, Francois [1 ,2 ,3 ,4 ]
Affiliations
[1] Inst Math Toulouse, F-31062 Toulouse 9, France
[2] Univ Toulouse, UMR5219, F-31062 Toulouse 9, France
[3] CNRS, UPS IMT, F-31062 Toulouse 9, France
[4] Inst Rech Technol St Exupery, Toulouse, France
Source
MATHEMATICAL AND SCIENTIFIC MACHINE LEARNING, VOL 107 | 2020 / Vol. 107
Keywords
Stable recovery; deep structured linear networks; convolutional linear networks; feature robustness
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
We consider a deep structured linear network under sparsity constraints. We study sharp conditions guaranteeing the stability of the optimal parameters defining the network. More precisely, we provide sharp conditions on the network architecture and the sample under which the error on the parameters defining the network scales linearly with the reconstruction error (i.e., the risk). Under these conditions, the weights obtained by a successful algorithm are therefore well defined and depend only on the architecture of the network and the sample, and the features in the latent spaces are stably defined. This stability property is required in order to interpret the features defined in the latent spaces, and it can also lead to a guarantee on the statistical risk; this is what motivates the study. The analysis is based on the recently proposed Tensorial Lifting. The particularity of this paper is to consider a sparsity prior, which leads to a better stability constant. As an illustration, we detail the analysis and provide sharp stability guarantees for convolutional linear networks under a sparsity prior. In this analysis, we distinguish the roles of the network architecture and the sample input, which highlights the requirements on the data in connection with parameter stability.
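The objects the abstract discusses can be made concrete with a small numerical sketch (this is an illustration of the setting only, not the paper's analysis or algorithm; the `sparse_factor` helper and all dimensions are hypothetical). A deep structured linear network computes an end-to-end linear map as a product of factors, each constrained to a sparse support; stability asks that a small reconstruction error on the end-to-end operator force a correspondingly small error on the factors.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_factor(shape, density, rng):
    """Random matrix whose entries are zeroed outside a random sparse support."""
    values = rng.standard_normal(shape)
    support = rng.random(shape) < density  # keep roughly `density` of the entries
    return values * support

# A depth-3 structured linear network: f(x) = M3 @ M2 @ M1 @ x,
# with every factor constrained to a sparse support.
M1 = sparse_factor((8, 8), 0.3, rng)
M2 = sparse_factor((8, 8), 0.3, rng)
M3 = sparse_factor((4, 8), 0.3, rng)
end_to_end = M3 @ M2 @ M1  # the operator the network actually implements

# Perturb one factor slightly and compare the parameter error with the
# error on the end-to-end operator (the reconstruction error).
eps = 1e-3
M1_pert = M1 + eps * sparse_factor((8, 8), 0.3, rng)
param_err = np.linalg.norm(M1_pert - M1)
recon_err = np.linalg.norm(M3 @ M2 @ M1_pert - end_to_end)
print(f"parameter error:      {param_err:.3e}")
print(f"reconstruction error: {recon_err:.3e}")
```

In this direction the comparison is elementary (submultiplicativity gives `recon_err <= ||M3 @ M2|| * param_err`); the stability results the abstract describes concern the much harder converse, bounding the parameter error by a constant times the reconstruction error under conditions on the architecture and the sample.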
Pages: 107 - 127
Page count: 21