l1 Regularization in Two-Layer Neural Networks

Cited by: 5
Authors
Li, Gen [1 ]
Gu, Yuantao [2 ]
Ding, Jie [3 ]
Affiliations
[1] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
[2] Tsinghua Univ, Dept Elect & Comp Engn, Beijing, Peoples R China
[3] Univ Minnesota, Sch Stat, Minneapolis, MN 55455 USA
Keywords
Generalization error; model complexity; neural network; regularization; approximation; bounds
DOI
10.1109/LSP.2021.3129698
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
A crucial problem in neural networks is selecting an architecture that strikes an appropriate tradeoff between underfitting and overfitting. This work shows that l1 regularization for two-layer neural networks can control the generalization error and sparsify the input dimension. In particular, with an appropriate l1 regularization on the output layer, the network can achieve a tight statistical risk. Moreover, an appropriate l1 regularization on the input layer leads to a risk bound that does not involve the input data dimension. The results also indicate that training a wide neural network with a suitable regularization provides an alternative bias-variance tradeoff to selecting from a candidate set of neural networks. Our analysis is based on a new integration of dimension-based and norm-based complexity analyses to bound the generalization error.
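The following is a minimal, self-contained sketch (not the authors' implementation) of the setup the abstract describes: a two-layer ReLU network trained with squared loss plus l1 penalties on the input-layer weights (to sparsify input dimensions) and on the output-layer weights (to control the statistical risk). The network width, the penalty weights lam_in and lam_out, the optimizer settings, and the synthetic data are illustrative assumptions, not values prescribed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerNet(nn.Module):
    """Two-layer ReLU network: x -> relu(W1 x + b1) -> W2 h + b2."""
    def __init__(self, d_in, width):
        super().__init__()
        self.hidden = nn.Linear(d_in, width)   # input layer, weights W1
        self.output = nn.Linear(width, 1)      # output layer, weights W2

    def forward(self, x):
        return self.output(torch.relu(self.hidden(x)))

def l1_regularized_loss(net, x, y, lam_in=1e-3, lam_out=1e-3):
    """Squared loss plus l1 penalties on the input- and output-layer weights.

    lam_in and lam_out are illustrative placeholders; the paper's theory
    prescribes how such penalty levels should scale, which is not reproduced here.
    """
    fit = F.mse_loss(net(x), y)
    pen_in = net.hidden.weight.abs().sum()    # l1 norm of W1 (sparsifies inputs)
    pen_out = net.output.weight.abs().sum()   # l1 norm of W2 (controls risk)
    return fit + lam_in * pen_in + lam_out * pen_out

if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(256, 20)                   # 20 input dimensions
    y = X[:, :3].sum(dim=1, keepdim=True)      # only the first 3 inputs matter
    net = TwoLayerNet(d_in=20, width=100)      # deliberately wide hidden layer
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        l1_regularized_loss(net, X, y).backward()
        opt.step()
```

In this toy example only the first three input coordinates carry signal, so the input-layer penalty drives the entries of the hidden-layer weight matrix associated with the remaining coordinates toward zero, mirroring the input-dimension sparsification the abstract refers to, while the output-layer penalty keeps the l1 norm of the output weights small.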
Pages: 135-139
Number of pages: 5
Related Papers
50 records in total
  • [1] Smooth group L1/2 regularization for input layer of feedforward neural networks
    Li, Feng
    Zurada, Jacek M.
    Wu, Wei
    NEUROCOMPUTING, 2018, 314: 109-119
  • [2] Structure Optimization of Neural Networks with L1 Regularization on Gates
    Chang, Qin
    Wang, Junze
    Zhang, Huaqing
    Shi, Lina
    Wang, Jian
    Pal, Nikhil R.
    2018 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI), 2018: 196-203
  • [3] Group L1/2 Regularization for Pruning Hidden Layer Nodes of Feedforward Neural Networks
    Alemu, Habtamu Zegeye
    Zhao, Junhong
    Li, Feng
    Wu, Wei
    IEEE ACCESS, 2019, 7: 9540-9557
  • [4] Compact Deep Neural Networks with l1,1 and l1,2 Regularization
    Ma, Rongrong
    Niu, Lingfeng
    2018 18TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW), 2018: 1248-1254
  • [5] Provable Identifiability of Two-Layer ReLU Neural Networks via LASSO Regularization
    Li G.
    Wang G.
    Ding J.
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2023, 69(09): 5921-5935
  • [6] New method of training two-layer sigmoid neural networks using regularization
    Krutikov, V. N.
    Kazakovtsev, L. A.
    Shkaberina, G. Sh
    Kazakovtsev, V. L.
    INTERNATIONAL WORKSHOP ADVANCED TECHNOLOGIES IN MATERIAL SCIENCE, MECHANICAL AND AUTOMATION ENGINEERING - MIP: ENGINEERING - 2019, 2019, 537
  • [7] Transformed l1 regularization for learning sparse deep neural networks
    Ma, Rongrong
    Miao, Jianyu
    Niu, Lingfeng
    Zhang, Peng
    NEURAL NETWORKS, 2019, 119: 286-298
  • [8] Structured Pruning of Convolutional Neural Networks via L1 Regularization
    Yang, Chen
    Yang, Zhenghong
    Khattak, Abdul Mateen
    Yang, Liu
    Zhang, Wenxin
    Gao, Wanlin
    Wang, Minjuan
    IEEE ACCESS, 2019, 7: 106385-106394
  • [9] Smooth Group L1/2 Regularization for Pruning Convolutional Neural Networks
    Bao, Yuan
    Liu, Zhaobin
    Luo, Zhongxuan
    Yang, Sibo
    SYMMETRY-BASEL, 2022, 14(01)
  • [10] Plasticity of two-layer fast neural networks
    Alexeev, AA
    Dorogov, AY
    JOURNAL OF COMPUTER AND SYSTEMS SCIENCES INTERNATIONAL, 1999, 38(05): 786-791