An Efficient Approach to Escalate the Speed of Training Convolution Neural Networks

Cited by: 1
Authors
Pabitha, P. [1 ]
Jayasimhan, Anusha [1 ]
Affiliations
[1] Anna Univ, Madras Inst Technol Campus, Dept Comp Technol, Chennai 600044, India
Keywords
CNN; deep learning; image classification; model compression
DOI
10.23919/JCC.fa.2022-0639.202402
CLC number
TN [Electronic technology; Communication technology]
Subject classification code
0809
Abstract
Deep neural networks excel at image identification and computer vision applications such as visual product search, facial recognition, medical image analysis, object detection, semantic segmentation, instance segmentation, and many others. Convolutional neural networks (CNNs) are widely employed in image and video recognition applications; they deliver better performance, but at a higher computational cost. With the advent of big data, the growing scale of datasets has made processing and model training time-consuming, resulting in longer training cycles. Moreover, these large-scale datasets contain redundant data points that have minimal impact on the final outcome of the model. To address these issues, an accelerated CNN system is proposed that speeds up training by eliminating non-critical data points during training, along with a model compression method. Furthermore, the critical input data are identified by aggregating the data points at two levels of granularity and evaluating their impact on the model output. Extensive experiments with the proposed method on the CIFAR-10 dataset using ResNet models show a 40% reduction in the number of FLOPs with an accuracy degradation of just 0.11%.
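The core acceleration idea described in the abstract is to skip redundant training points whose influence on the model output is small. A minimal PyTorch sketch of this kind of importance-based sample pruning follows; the per-sample loss criterion, the keep_ratio parameter, and all function names are illustrative assumptions, not the paper's two-granularity aggregation or its compression method.

import torch
import torch.nn as nn
import torch.nn.functional as F

def train_with_sample_pruning(model, loader, optimizer, keep_ratio=0.6):
    """One epoch that backpropagates only through the most informative
    samples of each mini-batch, skipping low-impact data points.
    Illustrative sketch only, not the paper's criterion."""
    model.train()
    for inputs, targets in loader:
        logits = model(inputs)
        # Per-sample loss as a stand-in importance score: samples the
        # model already fits well contribute little to the update.
        losses = F.cross_entropy(logits, targets, reduction="none")
        k = max(1, int(keep_ratio * losses.numel()))
        keep = torch.topk(losses, k).indices   # indices of "critical" points
        optimizer.zero_grad()
        losses[keep].mean().backward()         # update on the kept subset only
        optimizer.step()

# Toy usage: a small CNN on random CIFAR-10-shaped data.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
data = torch.utils.data.TensorDataset(
    torch.randn(128, 3, 32, 32), torch.randint(0, 10, (128,)))
loader = torch.utils.data.DataLoader(data, batch_size=32)
train_with_sample_pruning(model, loader,
                          torch.optim.SGD(model.parameters(), lr=0.1))

Note that a loss-based filter like this saves backward-pass computation only; the 40% FLOP reduction reported in the abstract also involves the model compression method.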
Pages: 258-269
Page count: 12