A Hybrid Improved Neural Networks Algorithm Based on L2 and Dropout Regularization

Cited by: 9
Authors
Xie, Xiaoyun [1 ,2 ]
Xie, Ming [1 ]
Moshayedi, Ata Jahangir [1 ]
Skandari, Mohammad Hadi Noori [3 ]
Affiliations
[1] Jiangxi Univ Sci & Technol, Sch Informat Engn, 86 Hongqi Ave, Ganzhou 341000, Jiangxi, Peoples R China
[2] Gannan Univ Sci & Technol, Sch Elect Informat Engn, Ganzhou 341000, Peoples R China
[3] Shahrood Univ Technol, Fac Math Sci, Shahrood, Iran
Keywords
Quality control
DOI
10.1155/2022/8220453
Chinese Library Classification (CLC)
T [Industrial Technology]
Subject classification
08
Abstract
Small samples are prone to overfitting during neural network training. This paper proposes an optimization approach based on L2 and dropout regularization, called a hybrid improved neural network algorithm, to overcome this issue. The proposed model was evaluated on the Modified National Institute of Standards and Technology (MNIST, grayscale, 28 × 28 × 1) and Canadian Institute for Advanced Research 10 (CIFAR10, RGB, 32 × 32 × 3) data sets, applied to the LeNet-5 and autoencoder neural network architectures. The evaluation is conducted with cross-validation, and the model's prediction result serves as the final measure of model quality. The results show that the proposed hybrid algorithm performs more effectively: it avoids overfitting, improves the prediction accuracy of the network model in classification tasks, and reduces the reconstruction error in the unsupervised setting. In addition, the proposed algorithm reduces the effect of noisy data and bias and shortens the training time of neural network models without increasing time complexity. Quantitative and qualitative experimental results show that on the MNIST test set the proposed algorithm improves accuracy by 2.3% and 0.9% over L2 regularization and dropout regularization, respectively, and on the CIFAR10 data set by 0.92% over L2 regularization and 1.31% over dropout regularization. On the MNIST data set, the proposed algorithm reduces the reconstruction error by 0.00174 and 0.00398 compared with L2 regularization and dropout regularization, respectively, and on the CIFAR10 data set by 0.00078 compared with L2 regularization and 0.00174 compared with dropout regularization.
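To make the hybrid scheme concrete, below is a minimal sketch of combining the two regularizers the abstract names: dropout layers inside a LeNet-5-style MNIST classifier plus an L2 penalty supplied through the optimizer's weight_decay argument. The paper does not publish its code, so the framework (PyTorch), the dropout rate p = 0.5, the weight decay 1e-4, and the learning rate are illustrative assumptions, not the authors' reported settings.

import torch
import torch.nn as nn

# Sketch only: dropout + L2 ("weight decay") on a LeNet-5-style network.
# All hyperparameters here are assumptions, not the paper's settings.

class LeNet5Dropout(nn.Module):
    """LeNet-5-style classifier for MNIST (1 x 28 x 28) with dropout."""
    def __init__(self, num_classes: int = 10, p: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Dropout(p),        # dropout regularization
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = LeNet5Dropout()
# weight_decay adds the L2 penalty to every parameter's gradient update.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for MNIST.
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
model.train()                 # enables dropout during training
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

Note that dropout is active only in training mode (model.train()); calling model.eval() at test time disables it, while the L2 term influences only the gradient updates during training.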
Pages: 19
Related papers (50 in total)
  • [31] CamDrop: A New Explanation of Dropout and A Guided Regularization Method for Deep Neural Networks
    Wang, Hongjun
    Wang, Guangrun
    Li, Guanbin
    Lin, Liang
    PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019, : 1141 - 1149
  • [32] A Review on Dropout Regularization Approaches for Deep Neural Networks within the Scholarly Domain
    Salehin, Imrus
    Kang, Dae-Ki
    ELECTRONICS, 2023, 12 (14)
  • [33] Batch Normalization and Dropout Regularization in Training Deep Neural Networks with Label Noise
    Rusiecki, Andrzej
    INTELLIGENT SYSTEMS DESIGN AND APPLICATIONS, ISDA 2021, 2022, 418 : 57 - 66
  • [34] Ising-Dropout: A Regularization Method for Training and Compression of Deep Neural Networks
    Salehinejad, Hojjat
    Valaee, Shahrokh
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3602 - 3606
  • [36] Blind Image Restoration Based on l1 - l2 Blur Regularization
    Xiao, Su
    ENGINEERING LETTERS, 2020, 28 (01) : 148 - 154
  • [37] Further results on L2 - L∞ state estimation of delayed neural networks
    Qian, Wei
    Chen, Yonggang
    Liu, Yurong
    Alsaadi, Fuad E.
    NEUROCOMPUTING, 2018, 273 : 509 - 515
  • [38] ELM with L1/L2 regularization constraints
    Feng B.
    Qin K.
    Jiang Z.
    Hanjie Xuebao/Transactions of the China Welding Institution, 2018, 39 (09) : 31 - 35
  • [39] Stochastic PCA with l2 and l1 Regularization
    Mianjy, Poorya
    Arora, Raman
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [40] αl1 - βl2 regularization for sparse recovery
    Ding, Liang
    Han, Weimin
    INVERSE PROBLEMS, 2019, 35 (12)