An Efficient Approach to Escalate the Speed of Training Convolution Neural Networks

Cited: 1
Authors
Pabitha, P. [1 ]
Jayasimhan, Anusha [1 ]
Affiliations
[1] Anna Univ, Madras Inst Technol Campus, Dept Comp Technol, Chennai 600044, India
Keywords
CNN; deep learning; image classification; model compression;
DOI
10.23919/JCC.fa.2022-0639.202402
Chinese Library Classification (CLC)
TN [Electronic technology, communication technology];
Discipline code
0809;
Abstract
Deep neural networks excel at image identification and computer vision applications such as visual product search, facial recognition, medical image analysis, object detection, semantic segmentation, and instance segmentation. Convolutional neural networks (CNNs) are widely employed in image and video recognition applications. These networks deliver better performance, but at a higher computational cost. With the advent of big data, the growing scale of datasets has made processing and model training time-consuming, resulting in longer training times. Moreover, these large-scale datasets contain redundant data points that have minimal impact on the final outcome of the model. To address these issues, an accelerated CNN system is proposed that speeds up training by eliminating noncritical data points during training, along with a model compression method. Furthermore, critical input data are identified by aggregating the data points at two levels of granularity and evaluating their impact on the model output. Extensive experiments with the proposed method on the CIFAR-10 dataset using ResNet models yield a 40% reduction in the number of FLOPs with an accuracy degradation of only 0.11%.
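The abstract's core idea of dropping noncritical data points can be illustrated with a minimal, hedged sketch: rank samples by per-sample loss and keep only the highest-loss (assumed most informative) fraction for the next training pass. This is an illustrative assumption about how "critical" samples might be selected; the function name, threshold, and ranking criterion are not from the paper, which additionally aggregates data points at two levels of granularity.

```python
def select_critical(losses, keep_fraction=0.6):
    """Return indices of the highest-loss samples.

    Illustrative sketch only: assumes high per-sample loss marks a
    "critical" data point worth keeping during training. `keep_fraction`
    is a hypothetical hyperparameter, not a value from the paper.
    """
    k = max(1, int(len(losses) * keep_fraction))
    # Rank sample indices by loss, highest first.
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    # Return the kept indices in their original dataset order.
    return sorted(ranked[:k])

# Example: made-up per-sample losses from one forward pass.
losses = [0.02, 1.3, 0.9, 0.05, 0.7]
print(select_critical(losses, keep_fraction=0.6))  # → [1, 2, 4]
```

In a real training loop, the subset returned here would feed the next epoch's batches, so low-loss (redundant) samples contribute no gradient computation, which is where the FLOP savings would come from.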
Pages: 258-269
Page count: 12
Related Papers
50 records total
  • [21] An efficient image dahazing using Googlenet based convolution neural networks
    Babu, Harish G.
    Venkatram, N.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (30) : 43897 - 43917
  • [22] A Non-deterministic Training Approach for Memory-Efficient Stochastic Neural Networks
    Golbabaei, Babak
    Zhu, Guangxian
    Kan, Yirong
    Zhang, Renyuan
    Nakashima, Yasuhiko
    2023 IEEE 36TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE, SOCC, 2023, : 232 - 237
  • [24] An Efficient License Plate Recognition System Using Convolution Neural Networks
    Lin, Cheng-Hung
    Lin, Yong-Sin
    Liu, Wei-Chen
    PROCEEDINGS OF 4TH IEEE INTERNATIONAL CONFERENCE ON APPLIED SYSTEM INNOVATION 2018 (IEEE ICASI 2018), 2018, : 224 - 227
  • [25] Generalized RLS approach to the training of neural networks
    Xu, Y
    Wong, KW
    Leung, CS
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2006, 17 (01): : 19 - 34
  • [26] Efficient and effective training of sparse recurrent neural networks
    Liu, Shiwei
    Ni'mah, Iftitahu
    Menkovski, Vlado
    Mocanu, Decebal Constantin
    Pechenizkiy, Mykola
    NEURAL COMPUTING AND APPLICATIONS, 2021, 33 : 9625 - 9636
  • [27] Efficient Incremental Training for Deep Convolutional Neural Networks
    Tao, Yudong
    Tu, Yuexuan
    Shyu, Mei-Ling
    2019 2ND IEEE CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR 2019), 2019, : 286 - 291
  • [28] Data-Efficient Augmentation for Training Neural Networks
    Liu, Tian Yu
    Mirzasoleiman, Baharan
    Advances in Neural Information Processing Systems, 2022, 35
  • [29] An Efficient Optimization Technique for Training Deep Neural Networks
    Mehmood, Faisal
    Ahmad, Shabir
    Whangbo, Taeg Keun
    MATHEMATICS, 2023, 11 (06)
  • [30] An efficient global algorithm for supervised training of neural networks
    Shukla, KK
    Raghunath
    COMPUTERS & ELECTRICAL ENGINEERING, 1999, 25 (03) : 193 - 216