Wide deep residual networks in networks

Cited by: 4
Authors
Alaeddine, Hmidi [1 ]
Jihene, Malek [1 ,2 ]
Affiliations
[1] Monastir Univ, Fac Sci Monastir, Lab Elect & Microelect, LR99ES30, Monastir 5000, Tunisia
[2] Sousse Univ, Higher Inst Appl Sci & Technol Sousse, Sousse 4000, Tunisia
Keywords
Deep network in network; Convolutional neural network; CIFAR-10
DOI
10.1007/s11042-022-13696-0
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline classification code
0812
Abstract
The Deep Residual Network in Network (DrNIN) model [18] is an important extension of the convolutional neural network (CNN). It has proven capable of scaling to dozens of layers. The model replaces the linear filter of a convolution with a nonlinear function, represented by the layers of a multilayer perceptron (MLP) [23]. Increasing the depth of a DrNIN can improve classification and detection accuracy. However, deeper models become harder to train, training slows down, and the problem of diminishing feature reuse arises. To address these issues, this paper presents a detailed experimental study of the DrMLPconv block architecture, on which we base a new, wider variant of DrNIN: we increase the width of the DrNIN and decrease its depth, and call the resulting model WDrNIN. Through an experimental study on the CIFAR-10 dataset, we show that WDrNIN models can gain accuracy through increased width. Moreover, we demonstrate that even a single WDrNIN outperforms all MLPconv-based network models in accuracy and efficiency, with WDrNIN-4-2 reaching an accuracy of 93.553%.
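The record gives no implementation details beyond the abstract. As a minimal NumPy sketch of the idea it describes, the block below models an MLPconv unit as a per-pixel MLP (a stack of 1x1 convolutions), wraps two such units in an identity residual skip, and exposes a widening factor that multiplies each MLP's hidden width while keeping the block's input/output channel count fixed. All function names, layer counts, and initializations here are illustrative assumptions, not the authors' actual WDrNIN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def mlpconv(x, w1, w2):
    """Per-pixel MLP, i.e. two 1x1 convolutions: x has shape (H, W, C_in).
    Each spatial location is passed through the same small MLP."""
    h = relu(np.einsum('hwc,cd->hwd', x, w1))
    return relu(np.einsum('hwd,de->hwe', h, w2))

def residual_mlpconv_block(x, params):
    """Residual MLPconv block (sketch): identity skip around two stacked
    MLPconv units. Channel counts are kept equal so the skip is a plain add."""
    w1, w2, w3, w4 = params
    out = mlpconv(x, w1, w2)
    out = mlpconv(out, w3, w4)
    return relu(out + x)

def make_params(c, widen=1):
    """Widening multiplies the hidden width of each per-pixel MLP by `widen`,
    leaving the block's input/output channel count unchanged."""
    h = c * widen
    return (rng.standard_normal((c, h)) * 0.1,
            rng.standard_normal((h, c)) * 0.1,
            rng.standard_normal((c, h)) * 0.1,
            rng.standard_normal((h, c)) * 0.1)

x = rng.standard_normal((8, 8, 16))
y_narrow = residual_mlpconv_block(x, make_params(16, widen=1))
y_wide = residual_mlpconv_block(x, make_params(16, widen=2))
print(y_narrow.shape, y_wide.shape)  # both (8, 8, 16)
```

Note the trade-off the abstract describes: widening (widen=2) doubles the parameters of each block, so comparable capacity can be reached with fewer blocks, i.e. a shallower, easier-to-train network.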
Pages: 7889-7899
Number of pages: 11
Related Papers
50 records total
  • [31] Propagation Mechanism for Deep and Wide Neural Networks
    Xu, Dejiang
    Lee, Mong Li
    Hsu, Wynne
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 9212 - 9220
  • [32] Deep and Wide Neural Networks Covariance Estimation
    Arratia, Argimiro
    Cabana, Alejandra
    Rafael Leon, Jose
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT I, 2020, 12396 : 195 - 206
  • [33] The Loss Surface of Deep and Wide Neural Networks
    Quynh Nguyen
    Hein, Matthias
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [34] Learning Deep and Wide: A Spectral Method for Learning Deep Networks
    Shao, Ling
    Wu, Di
    Li, Xuelong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2014, 25 (12) : 2303 - 2308
  • [35] High precision in microRNA prediction: A novel genome-wide approach with convolutional deep residual networks
    Yones, C.
    Raad, J.
    Bugnon, L. A.
    Milone, D. H.
    Stegmayer, G.
    COMPUTERS IN BIOLOGY AND MEDICINE, 2021, 134
  • [36] Deep contextual recurrent residual networks for scene labeling
    Le, T. Hoang Ngan
    Chi Nhan Duong
    Han, Ligong
    Luu, Khoa
    Quach, Kha Gia
    Savvides, Marios
    PATTERN RECOGNITION, 2018, 80 : 32 - 41
  • [37] Visualizing Apparent Personality Analysis with Deep Residual Networks
    Gucluturk, Yagmur
    Guclu, Umut
    Perez, Marc
    Jair Escalante, Hugo
    Baro, Xavier
    Guyon, Isabelle
    Andujar, Carlos
    Jacques, Julio, Jr.
    Madadi, Meysam
    Escalera, Sergio
    van Gerven, Marcel A. J.
    van Lier, Rob
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017), 2017, : 3101 - 3109
  • [38] DRCDN: learning deep residual convolutional dehazing networks
    Shengdong Zhang
    Fazhi He
    The Visual Computer, 2020, 36 : 1797 - 1808
  • [39] GLOBALLY CONVERGENT MULTILEVEL TRAINING OF DEEP RESIDUAL NETWORKS
    Kopanicakova, Alena
    Krause, Rolf
    SIAM JOURNAL ON SCIENTIFIC COMPUTING, 2023, 45 (03): : S254 - S280
  • [40] Revolutionizing Image Recognition and Beyond with Deep Residual Networks
    Baraneedharan, P.
    Nithyasri, A.
    Keerthana, P.
    COMMUNICATION AND INTELLIGENT SYSTEMS, VOL 1, ICCIS 2023, 2024, 967 : 441 - 448