A pruning feedforward small-world neural network by dynamic sparse regularization with smoothing l1/2 norm for nonlinear system modeling

Cited: 4
Authors
Li, Wenjing [1]
Chu, Minghui
Institutions
[1] Beijing University of Technology, Faculty of Information Technology, Beijing 100124, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Feedforward small-world neural network; Weight pruning; Dynamic regularization; Smoothing l1/2 norm; CONVERGENCE; APPROXIMATION; ALGORITHM
DOI
10.1016/j.asoc.2023.110133
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
To address the relatively large architecture of small-world neural networks and to improve their generalization ability, we propose a pruning feedforward small-world neural network based on a dynamic regularization method with the smoothing l1/2 norm (PFSWNN-DSRL1/2) and apply it to nonlinear system modeling. A feedforward small-world neural network is first constructed by the Watts-Strogatz rewiring rule. Redundant weights are then pruned by minimizing an error function augmented with a smoothing l1/2 norm penalty, yielding a sparse architecture. A dynamic adjustment strategy for the regularization strength is further designed to balance the trade-off between training accuracy and sparsity. Several experiments are carried out to evaluate the performance of the proposed PFSWNN-DSRL1/2 on nonlinear system modeling tasks. The results show that PFSWNN-DSRL1/2 achieves satisfactory modeling accuracy with an average of 17% of its weights pruned. Comparative results demonstrate that the generalization performance of the proposed model improves by 8.1% relative to the baseline method (FSWNN) while using a sparser structure, and that pruning does not degrade the small-world property. (c) 2023 Elsevier B.V. All rights reserved.
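The abstract names three technical ingredients: Watts-Strogatz rewiring for the initial topology, an error function augmented with a smoothing l1/2 norm penalty, and a dynamically adjusted regularization strength. The following minimal Python/NumPy sketch illustrates these pieces. The piecewise-polynomial smoothing of |w| follows the construction used in the smoothing-L1/2 literature (cf. references [4] and [7] below); the `dynamic_lambda` rule and all parameter values here are hypothetical placeholders, not the paper's actual method.

```python
import numpy as np
import networkx as nx

def smooth_abs(w, a=0.1):
    """C^1 piecewise-polynomial smoothing of |w| near zero,
    in the style of the smoothing-L1/2 literature (refs [4], [7])."""
    w = np.asarray(w, dtype=float)
    inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), inner)

def smooth_abs_grad(w, a=0.1):
    """Derivative of smooth_abs with respect to w."""
    w = np.asarray(w, dtype=float)
    inner = -w**3 / (2 * a**3) + 3 * w / (2 * a)
    return np.where(np.abs(w) >= a, np.sign(w), inner)

def l_half_penalty(weights, a=0.1):
    """Smoothing l1/2 penalty: sum_i f(w_i)^(1/2), f = smooth_abs."""
    return np.sum(smooth_abs(weights, a) ** 0.5)

def l_half_penalty_grad(weights, a=0.1):
    """Gradient 0.5 * f(w)^(-1/2) * f'(w). Finite everywhere because
    f(w) >= 3a/8 > 0, which is the point of the smoothing."""
    f = smooth_abs(weights, a)
    return 0.5 * f ** (-0.5) * smooth_abs_grad(weights, a)

def dynamic_lambda(lam0, train_err, err0):
    """Hypothetical dynamic regularization strength: raise the sparsity
    pressure as the training error falls. Placeholder only; the paper's
    actual adjustment strategy is not reproduced here."""
    return lam0 * max(0.0, 1.0 - train_err / err0)

# Small-world topology via Watts-Strogatz rewiring: n nodes, each joined
# to its k nearest neighbors, with each edge rewired with probability p.
G = nx.watts_strogatz_graph(n=30, k=4, p=0.1, seed=0)

# Regularized objective for one weight vector w:
#   E(w) = mse(w) + lambda_t * sum_i f(w_i)^(1/2)
w = np.random.randn(100) * 0.5
lam_t = dynamic_lambda(lam0=1e-3, train_err=0.02, err0=0.1)
E_reg = lam_t * l_half_penalty(w)          # penalty term of E(w)
g_reg = lam_t * l_half_penalty_grad(w)     # its gradient contribution
print(G.number_of_edges(), E_reg, g_reg.shape)
```

Pruning then amounts to removing connections whose trained weights the penalty has driven to (near) zero, which is how the sparse architecture described in the abstract arises.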
Pages: 14
Related Papers
39 records in total
  • [1] A pruning feedforward small-world neural network based on Katz centrality for nonlinear system modeling
    Li, Wenjing
    Chu, Minghui
    Qiao, Junfei
    NEURAL NETWORKS, 2020, 130 : 269 - 285
  • [2] A Fast Feedforward Small-World Neural Network for Nonlinear System Modeling
    Li, Wenjing
    Li, Zhigang
    Qiao, Junfei
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024: 1 - 13
  • [3] Group L1/2 Regularization for Pruning Hidden Layer Nodes of Feedforward Neural Networks
    Alemu, Habtamu Zegeye
    Zhao, Junhong
    Li, Feng
    Wu, Wei
    IEEE ACCESS, 2019, 7 : 9540 - 9557
  • [4] Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks
    Wu, Wei
    Fan, Qinwei
    Zurada, Jacek M.
    Wang, Jian
    Yang, Dakun
    Liu, Yan
    NEURAL NETWORKS, 2014, 50 : 72 - 78
  • [5] A Simple Neural Network for Sparse Optimization With l1 Regularization
    Ma, Litao
    Bian, Wei
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2021, 8 (04): 3430 - 3442
  • [7] Convergence of online gradient method for feedforward neural networks with smoothing L1/2 regularization penalty
    Fan, Qinwei
    Zurada, Jacek M.
    Wu, Wei
    NEUROCOMPUTING, 2014, 131 : 208 - 216
  • [8] Sparse Feature Grouping based on l1/2 Norm Regularization
    Mao, Wentao
    Xu, Wentao
    Li, Yuan
    2018 ANNUAL AMERICAN CONTROL CONFERENCE (ACC), 2018, : 1045 - 1051
  • [9] Sparse minimal learning machines via l1/2 norm regularization
    Dias, Madson L. D.
    Freire, Ananda L.
    Souza Junior, Amauri H.
    da Rocha Neto, Ajalmar R.
    Gomes, Joao P. P.
    2018 7TH BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 2018, : 206 - 211
  • [10] Smooth Group L1/2 Regularization for Pruning Convolutional Neural Networks
    Bao, Yuan
    Liu, Zhaobin
    Luo, Zhongxuan
    Yang, Sibo
    SYMMETRY-BASEL, 2022, 14 (01)