A Hybrid Improved Neural Networks Algorithm Based on L2 and Dropout Regularization

Times Cited: 9
Authors:
Xie, Xiaoyun [1 ,2 ]
Xie, Ming [1 ]
Moshayedi, Ata Jahangir [1 ]
Skandari, Mohammad Hadi Noori [3 ]
Affiliations:
[1] Jiangxi Univ Sci & Technol, Sch Informat Engn, 86 Hongqi Ave, Ganzhou 341000, Jiangxi, Peoples R China
[2] Gannan Univ Sci & Technol, Sch Elect Informat Engn, Ganzhou 341000, Peoples R China
[3] Shahrood Univ Technol, Fac Math Sci, Shahrood, Iran
Keywords:
Quality control
DOI:
10.1155/2022/8220453
CLC Number:
T [Industrial Technology]
Subject Classification Code:
08
Abstract:
Small samples are prone to overfitting during neural network training. To overcome this issue, this paper proposes an optimization approach based on L2 and dropout regularization, called a hybrid improved neural network algorithm. The proposed model was evaluated on the Modified National Institute of Standards and Technology (MNIST, grayscale, 28 × 28 × 1) and Canadian Institute for Advanced Research 10 (CIFAR-10, RGB, 32 × 32 × 3) data sets, applied to the LeNet-5 and autoencoder neural network architectures. Evaluation is conducted with cross-validation, and the model's prediction performance is used as the final measure of model quality. The results show that the proposed hybrid algorithm performs more effectively, avoids overfitting, improves the prediction accuracy of the network models in classification tasks, and reduces the reconstruction error in the unsupervised setting. In addition, without increasing time complexity, the proposed algorithm reduces the effect of noisy data and bias and improves the training time of the neural network models. Quantitative and qualitative experimental results show that on the MNIST test set, the proposed algorithm improves accuracy by 2.3% and 0.9% over L2 regularization and dropout regularization, respectively; on the CIFAR-10 data set, the accuracy improvements are 0.92% over L2 regularization and 1.31% over dropout regularization. On the MNIST data set, the proposed algorithm reduces the reconstruction error by 0.00174 and 0.00398 compared with L2 regularization and dropout regularization, respectively; on the CIFAR-10 data set, the reductions are 0.00078 compared with L2 regularization and 0.00174 compared with dropout regularization.
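The hybrid scheme pairs two complementary regularizers: an L2 penalty, which adds a term proportional to ||w||^2 to the training loss to shrink weights, and dropout, which randomly zeroes activations during training. This record carries no implementation details, so the following is only a minimal illustrative sketch in PyTorch of a LeNet-5-style classifier that uses both regularizers together; the layer shapes follow the classic LeNet-5 for MNIST, while the dropout rate (p_drop = 0.5) and the weight-decay coefficient (5e-4), which realizes the L2 penalty, are assumptions rather than values taken from the paper.

```python
# Minimal sketch: a LeNet-5-style network combining dropout with an L2 weight
# penalty (the hybrid studied in the paper). Hyperparameters are assumptions.
import torch
import torch.nn as nn

class LeNet5Hybrid(nn.Module):
    def __init__(self, num_classes=10, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # MNIST input: 1 x 28 x 28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 6 x 14 x 14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 16 x 10 x 10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 16 x 5 x 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Dropout(p_drop),   # dropout regularization on hidden activations
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet5Hybrid()
# weight_decay applies the L2 penalty: it adds lambda * w to each gradient step,
# equivalent to minimizing cross-entropy + (lambda / 2) * ||w||^2.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=5e-4)
criterion = nn.CrossEntropyLoss()
```

With this setup the optimizer's weight_decay term applies L2 shrinkage at every step, while the Dropout layers are active only in model.train() mode, so the two regularizers operate independently and can be tuned separately.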
Pages: 19
Related Papers (50 records in total):
  • [1] A Hybrid Improved Neural Networks Algorithm Based on L2 and Dropout Regularization
    Xie, Xiaoyun; Xie, Ming; Moshayedi, Ata Jahangir; Skandari, Mohammad Hadi Noori
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2022, 2022
  • [2] An Analysis of the Regularization between L2 and Dropout in Single Hidden Layer Neural Network
    Phaisangittisagul, Ekachai
    2016 7TH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS, MODELLING AND SIMULATION (ISMS), 2016: 174-179
  • [3] An Improved Variable Kernel Density Estimator Based on L2 Regularization
    Jin, Yi; He, Yulin; Huang, Defa
    MATHEMATICS, 2021, 9 (16)
  • [4] Hybridized sine cosine algorithm with convolutional neural networks dropout regularization application
    Bacanin, Nebojsa; Zivkovic, Miodrag; Al-Turjman, Fadi; Venkatachalam, K.; Trojovsky, Pavel; Strumberger, Ivana; Bezdan, Timea
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [5] Regularization of deep neural networks with spectral dropout
    Khan, Salman H.; Hayat, Munawar; Porikli, Fatih
    NEURAL NETWORKS, 2019, 110: 82-90
  • [6] Enhance the Performance of Deep Neural Networks via L2 Regularization on the Input of Activations
    Shi, Guang; Zhang, Jiangshe; Li, Huirong; Wang, Changpeng
    NEURAL PROCESSING LETTERS, 2019, 50 (01): 57-75
  • [7] On the training dynamics of deep networks with L2 regularization
    Lewkowycz, Aitor; Gur-Ari, Guy
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [8] L2/3 regularization: Convergence of iterative thresholding algorithm
    Zhang, Yong; Ye, Wanzhou
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2015, 33: 350-357