Enhance the Hidden Structure of Deep Neural Networks by Double Laplacian Regularization

Cited by: 0
Authors
Fan, Yetian [1 ]
Yang, Wenyu [2 ]
Song, Bo [3 ]
Yan, Peilei [4 ]
Kang, Xiaoning [5 ,6 ]
Affiliations
[1] Liaoning Univ, Sch Math & Stat, Shenyang 110036, Peoples R China
[2] Huazhong Agr Univ, Coll Sci, Wuhan 430070, Peoples R China
[3] Drexel Univ, Coll Comp & Informat, Philadelphia, PA 19104 USA
[4] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
[5] Dongbei Univ Finance & Econ, Inst Supply Chain Analyt, Dalian 116025, Peoples R China
[6] Dongbei Univ Finance & Econ, Int Business Coll, Dalian 116025, Peoples R China
Keywords
Graph regularization; deep neural networks; double Laplacian regularization; hidden structure; extreme learning machine
DOI
10.1109/TCSII.2023.3260248
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Code
0808; 0809
Abstract
Laplacian regularization has been widely used in neural networks because it improves generalization performance by encouraging adjacent samples with the same label to share similar features. However, most existing methods consider only the global structure of the data within each class and neglect samples in boundary areas that carry different labels. To address this limitation and improve performance, this brief proposes a novel regularization method that enhances the hidden structure of deep neural networks. The proposed method imposes a double Laplacian regularization on the objective function and leverages the full data information to capture the hidden structure of the data in the manifold space. The double Laplacian regularization applies both attraction and repulsion effects to the hidden layer: it encourages the hidden features of instances with the same label to move closer together and forces those of different categories to move further apart. Extensive experiments demonstrate that the proposed method yields significant accuracy improvements on different types of deep neural networks.
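As a rough illustration of the attraction/repulsion idea described in the abstract, the sketch below is not the authors' implementation: the pairwise 0/1 weighting, the hinge margin, and the trade-off weights alpha and beta are assumptions. It computes an attraction term over same-label pairs of hidden features (equivalent, up to a constant, to a graph Laplacian penalty on the within-class adjacency) and a hinge-style repulsion term over different-label pairs.

```python
# Minimal sketch of a double Laplacian-style penalty on hidden features
# (assumed form; not the paper's exact objective).
import torch

def double_laplacian_penalty(h, labels, alpha=1.0, beta=1.0, margin=1.0):
    """h: (n, d) hidden-layer activations for a mini-batch; labels: (n,) ints."""
    # Pairwise squared Euclidean distances between hidden features.
    dist = torch.cdist(h, h, p=2) ** 2                          # (n, n)
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()  # same-label mask
    diff = 1.0 - same                                            # different-label mask

    n = h.shape[0]
    # Attraction: sum_{ij} W_ij ||h_i - h_j||^2 with W_ij = 1 for same-label pairs,
    # i.e. 2 * tr(H^T L_w H) for the within-class Laplacian L_w.
    attract = (same * dist).sum() / (n * n)
    # Repulsion: push different-label pairs apart; an assumed hinge keeps it bounded.
    repel = (diff * torch.clamp(margin - dist, min=0.0)).sum() / (n * n)

    return alpha * attract + beta * repel
```

During training the penalty would typically be added to the task loss on the hidden activations of a chosen layer, e.g. loss = criterion(logits, y) + double_laplacian_penalty(hidden, y).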
Pages: 3114-3118
Page count: 5
Related Papers (50 total)
  • [1] Threshout Regularization for Deep Neural Networks
    Williams, Travis
    Li, Robert
    SOUTHEASTCON 2021, 2021, : 728 - 735
  • [2] Feedforward Neural Networks with a Hidden Layer Regularization Method
    Alemu, Habtamu Zegeye
    Wu, Wei
    Zhao, Junhong
    SYMMETRY-BASEL, 2018, 10 (10):
  • [3] Regularization of hidden layer unit response for neural networks
    Taga, K
    Kameyama, K
    Toraichi, K
    2003 IEEE PACIFIC RIM CONFERENCE ON COMMUNICATIONS, COMPUTERS, AND SIGNAL PROCESSING, VOLS 1 AND 2, CONFERENCE PROCEEDINGS, 2003, : 348 - 351
  • [4] Enhance the Performance of Deep Neural Networks via L2 Regularization on the Input of Activations
    Shi, Guang
    Zhang, Jiangshe
    Li, Huirong
    Wang, Changpeng
    NEURAL PROCESSING LETTERS, 2019, 50 (01) : 57 - 75
  • [5] Towards Stochasticity of Regularization in Deep Neural Networks
    Sandjakoska, Ljubinka
    Bogdanova, Ana Madevska
    2018 14TH SYMPOSIUM ON NEURAL NETWORKS AND APPLICATIONS (NEUREL), 2018,
  • [6] Regularization of deep neural networks with spectral dropout
    Khan, Salman H.
    Hayat, Munawar
    Porikli, Fatih
    NEURAL NETWORKS, 2019, 110 : 82 - 90
  • [7] Sparse synthesis regularization with deep neural networks
    Obmann, Daniel
    Schwab, Johannes
    Haltmeier, Markus
    2019 13TH INTERNATIONAL CONFERENCE ON SAMPLING THEORY AND APPLICATIONS (SAMPTA), 2019,
  • [8] Group sparse regularization for deep neural networks
    Scardapane, Simone
    Comminiello, Danilo
    Hussain, Amir
    Uncini, Aurelio
    NEUROCOMPUTING, 2017, 241 : 81 - 89
  • [9] LocalDrop: A Hybrid Regularization for Deep Neural Networks
    Lu, Ziqing
    Xu, Chang
    Du, Bo
    Ishida, Takashi
    Zhang, Lefei
    Sugiyama, Masashi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (07) : 3590 - 3601