Enhance the Hidden Structure of Deep Neural Networks by Double Laplacian Regularization

Cited by: 0
Authors
Fan, Yetian [1 ]
Yang, Wenyu [2 ]
Song, Bo [3 ]
Yan, Peilei [4 ]
Kang, Xiaoning [5 ,6 ]
Affiliations
[1] Liaoning Univ, Sch Math & Stat, Shenyang 110036, Peoples R China
[2] Huazhong Agr Univ, Coll Sci, Wuhan 430070, Peoples R China
[3] Drexel Univ, Coll Comp & Informat, Philadelphia, PA 19104 USA
[4] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
[5] Dongbei Univ Finance & Econ, Inst Supply Chain Analyt, Dalian 116025, Peoples R China
[6] Dongbei Univ Finance & Econ, Int Business Coll, Dalian 116025, Peoples R China
Keywords
Graph regularization; deep neural networks; double Laplacian regularization; hidden structure; extreme learning machine
DOI
10.1109/TCSII.2023.3260248
CLC classification
TM (Electrical technology); TN (Electronic technology; communication technology)
Discipline codes
0808; 0809
Abstract
Laplacian regularization is widely used in neural networks to improve generalization performance: it encourages adjacent samples with the same label to share similar features. However, most existing methods consider only the global structure of same-label data and neglect samples with different labels in boundary regions. To address this limitation and improve performance, this brief proposes a novel regularization method that enhances the hidden structure of deep neural networks. Our proposed method imposes a double Laplacian regularization on the objective function and leverages the full data to capture its hidden structure in the manifold space. The double Laplacian regularization applies both attraction and repulsion effects on the hidden layer: it encourages the hidden features of instances with the same label to be closer, and forces those of different categories to be further apart. Extensive experiments demonstrate that the proposed method leads to significant improvements in accuracy on different types of deep neural networks.
Pages: 3114-3118
Page count: 5
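The abstract describes the regularizer only at a high level: an attraction term pulling same-label hidden features together and a repulsion term pushing different-label hidden features apart, each expressible as a trace term tr(H^T L H) for a graph Laplacian L. Below is a minimal PyTorch sketch of one way such a double Laplacian penalty could be computed on a mini-batch. The binary pairwise weights, the mean normalization, and the hyperparameters alpha and beta are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only: binary same/different-label weights and
# mean-normalized terms are assumptions, not the brief's formulation.
import torch

def double_laplacian_penalty(H: torch.Tensor, y: torch.Tensor,
                             alpha: float = 1.0,
                             beta: float = 1.0) -> torch.Tensor:
    """H: (n, d) hidden-layer features of a mini-batch; y: (n,) labels.

    The attraction term pulls same-label features together; the
    repulsion term pushes different-label features apart. Each term
    is, up to a constant, tr(H^T L H) for the corresponding Laplacian.
    """
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    same.fill_diagonal_(0.0)          # exclude self-pairs
    diff = 1.0 - same
    diff.fill_diagonal_(0.0)

    sq = (H * H).sum(dim=1)           # squared row norms, shape (n,)
    # Pairwise squared distances ||h_i - h_j||^2 via the Gram matrix.
    dist2 = sq.unsqueeze(0) + sq.unsqueeze(1) - 2.0 * (H @ H.t())

    attract = (same * dist2).sum() / same.sum().clamp(min=1.0)
    repel = (diff * dist2).sum() / diff.sum().clamp(min=1.0)
    # Minimizing shrinks same-label distances and, through the minus
    # sign, grows different-label distances.
    return alpha * attract - beta * repel
```

In training, the penalty would be added to the task loss, e.g. `loss = cross_entropy(logits, y) + lam * double_laplacian_penalty(H, y)`, where `H` is the output of the chosen hidden layer and `lam` is a hypothetical trade-off weight.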