An Efficient Approach to Escalate the Speed of Training Convolution Neural Networks

Cited by: 0
Authors
P Pabitha
Anusha Jayasimhan
Affiliations
Department of Computer Technology, Madras Institute of Technology Campus, Anna University
Keywords
DOI
Not available
CLC Number
TP183 [Artificial Neural Networks and Computing]; TP391.41
Subject Classification Codes
080203; 081104; 0812; 0835; 1405
Abstract
Deep neural networks excel at image identification and computer vision applications such as visual product search, facial recognition, medical image analysis, object detection, semantic segmentation, instance segmentation, and many others. Convolutional neural networks (CNNs) are widely employed in image and video recognition applications. These networks provide better performance, but at a higher computational cost. With the advent of big data, the growing scale of datasets has made processing and model training a time-consuming operation, resulting in longer training times. Moreover, these large-scale datasets contain redundant data points that have minimal impact on the final outcome of the model. To address these issues, an accelerated CNN system is proposed that speeds up training by eliminating non-critical data points during training, along with a model compression method. Furthermore, the critical input data are identified by aggregating the data points at two levels of granularity, which are used to evaluate their impact on the model output. Extensive experiments with the proposed method on the CIFAR-10 dataset and ResNet models show a 40% reduction in the number of FLOPs with an accuracy degradation of only 0.11%.
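The abstract does not give the exact selection rule, so the following is only a minimal sketch of the general idea of pruning low-impact samples during training, written as a PyTorch-style loop. It assumes per-sample loss as a stand-in for the paper's "impact on the model output" measure; the function name train_epoch_with_sample_pruning and the keep_ratio parameter are hypothetical, and the two-level aggregation and model-compression steps described in the abstract are not reproduced here.

# Minimal sketch (not the authors' implementation): backpropagate only through
# the samples judged most "critical" in each batch, using per-sample loss as an
# assumed proxy for impact on the model output.
import torch
import torch.nn as nn

def train_epoch_with_sample_pruning(model, loader, optimizer,
                                    keep_ratio=0.6, device="cpu"):
    """Train for one epoch, keeping only the keep_ratio fraction of samples
    in each batch that currently have the highest loss."""
    criterion = nn.CrossEntropyLoss(reduction="none")  # per-sample losses
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)

        # Forward pass on the full batch to score every sample.
        logits = model(images)
        per_sample_loss = criterion(logits, labels)

        # Select the highest-loss samples (the paper's actual criterion and
        # its two-level aggregation may differ from this heuristic).
        k = max(1, int(keep_ratio * images.size(0)))
        _, keep_idx = torch.topk(per_sample_loss, k)

        # Update the model using only the retained samples.
        loss = per_sample_loss[keep_idx].mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In practice the keep_ratio trades training speed against accuracy: fewer retained samples means fewer backward-pass FLOPs per epoch, at the risk of discarding examples that later become informative.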
Pages: 258 - 269
Number of pages: 12
Related Papers
50 records in total
  • [1] An Efficient Approach to Escalate the Speed of Training Convolution Neural Networks
    Pabitha, P.
    Jayasimhan, Anusha
    CHINA COMMUNICATIONS, 2024, 21 (02) : 258 - 269
  • [2] Design of Power-Efficient Training Accelerator for Convolution Neural Networks
    Hong, JiUn
    Arslan, Saad
    Lee, TaeGeon
    Kim, HyungWon
    ELECTRONICS, 2021, 10 (07)
  • [3] An Efficient Approach for Classifying Social Network Events Using Convolution Neural Networks
    Hussain, Ahsan
    Keshavamurthy, Bettahally N.
    Wazarkar, Seema
    ADVANCES IN DATA AND INFORMATION SCIENCES, ICDIS 2017, VOL 2, 2019, 39 : 177 - 184
  • [4] Swarm intelligence based approach for efficient training of regressive neural networks
    Lozito, Gabriele Maria
    Salvini, Alessandro
    Neural Computing and Applications, 2020, 32 : 10693 - 10704
  • [5] Swarm intelligence based approach for efficient training of regressive neural networks
    Lozito, Gabriele Maria
    Salvini, Alessandro
    NEURAL COMPUTING & APPLICATIONS, 2020, 32 (14): : 10693 - 10704
  • [6] Efficient training of backpropagation neural networks
    Otair, Mohammed A.
    Salameh, Walid A.
    NEURAL NETWORK WORLD, 2006, 16 (04) : 291 - 311
  • [7] Fast and Efficient Training of Neural Networks
    Yu, Hao
    Wilamowski, Bogdan M.
    3RD INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTION, 2010, : 175 - 181
  • [8] Efficient and Reliable Training of Neural Networks
    Yu, Hao
    Wilamowski, Bogdan M.
    HSI: 2009 2ND CONFERENCE ON HUMAN SYSTEM INTERACTIONS, 2009, : 106 - 112
  • [9] Efficient Scheduling in Training Deep Convolution Networks at Large Scale
    Que, Can
    Zhang, Xinming
    IEEE ACCESS, 2018, 6 : 61452 - 61456
  • [10] Efficient Convolution Neural Networks for Object Tracking Using Separable Convolution and Filter Pruning
    Mao, Yuanhong
    He, Zhanzhuang
    Ma, Zhong
    Tang, Xuehan
    Wang, Zhuping
    IEEE ACCESS, 2019, 7 : 106466 - 106474