An Efficient Approach to Escalate the Speed of Training Convolution Neural Networks

Citations: 0
Authors
P Pabitha
Anusha Jayasimhan
Affiliations
Department of Computer Technology, Madras Institute of Technology Campus, Anna University
Keywords: (none listed)
DOI: (not available)
CLC Classification
TP183 [Artificial Neural Networks and Computing]; TP391.41;
Discipline Codes
080203 ; 081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks excel at image identification and computer-vision applications such as visual product search, facial recognition, medical image analysis, object detection, semantic segmentation, instance segmentation, and many others. Convolutional neural networks (CNNs) are widely employed in image and video recognition applications. These networks deliver better performance, but at a higher computational cost. With the advent of big data, the growing scale of datasets has made processing and model training time-consuming operations, resulting in longer training times. Moreover, these large-scale datasets contain redundant data points that have minimal impact on the final outcome of the model. To address these issues, an accelerated CNN system is proposed that speeds up training by eliminating noncritical data points during training, combined with a model-compression method. Furthermore, critical input data are identified by aggregating data points at two levels of granularity, which are then used to evaluate their impact on the model output. Extensive experiments with the proposed method on the CIFAR-10 dataset using ResNet models show a 40% reduction in the number of FLOPs with a degradation of just 0.11% in accuracy.
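The abstract does not spell out how noncritical data points are detected. A minimal sketch of one common family of approaches (loss-based example pruning: rank training examples by their current per-sample loss and keep only the highest-loss, i.e. most informative, fraction for the next epoch) is shown below. The function name, the `keep_fraction` parameter, and the use of per-sample losses are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def select_critical_indices(per_sample_losses, keep_fraction=0.6):
    """Return indices of the examples with the largest loss.

    Assumption: high current loss is a proxy for a "critical" example,
    so low-loss (redundant) points can be skipped in the next epoch.

    per_sample_losses: 1-D sequence of the loss of each training example.
    keep_fraction: fraction of the dataset retained.
    """
    losses = np.asarray(per_sample_losses)
    n_keep = max(1, int(len(losses) * keep_fraction))
    # argsort is ascending; the tail holds the highest-loss examples.
    return np.sort(np.argsort(losses)[-n_keep:])

losses = [0.01, 0.90, 0.05, 0.70, 0.02]
print(select_critical_indices(losses, keep_fraction=0.4))  # -> [1 3]
```

In a real training loop, the retained indices would feed a dataloader sampler for the following epoch, so forward/backward passes (and hence FLOPs) are only spent on the selected subset.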
Pages: 258-269 (12 pages)
Related Papers (50 total)
  • [31] EXODUS: Stable and efficient training of spiking neural networks
    Bauer, Felix C.
    Lenz, Gregor
    Haghighatshoar, Saeid
    Sheik, Sadique
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [32] Efficient Constructive Techniques for Training Switching Neural Networks
    Ferrari, Enrico
    Muselli, Marco
    CONSTRUCTIVE NEURAL NETWORKS, 2009, 258 : 25 - 48
  • [33] Efficient Training of Artificial Neural Networks for Autonomous Navigation
    Pomerleau, Dean A.
    NEURAL COMPUTATION, 1991, 3 (01) : 88 - 97
  • [34] Efficient and effective training of sparse recurrent neural networks
    Liu, Shiwei
    Ni'mah, Iftitahu
    Menkovski, Vlado
    Mocanu, Decebal Constantin
    Pechenizkiy, Mykola
    NEURAL COMPUTING & APPLICATIONS, 2021, 33 (15): 9625 - 9636
  • [35] Accurate, efficient and scalable training of Graph Neural Networks
    Zeng, Hanqing
    Zhou, Hongkuan
    Srivastava, Ajitesh
    Kannan, Rajgopal
    Prasanna, Viktor
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2021, 147 : 166 - 183
  • [36] Data-Efficient Augmentation for Training Neural Networks
    Liu, Tian Yu
    Mirzasoleiman, Baharan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [37] Efficient Training of Low-Curvature Neural Networks
    Srinivas, Suraj
    Matoba, Kyle
    Lakkaraju, Himabindu
    Fleuret, Francois
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [38] Efficient Communications in Training Large Scale Neural Networks
    Zhao, Yiyang
    Wang, Linnan
    Wu, Wei
    Bosilca, George
    Vuduc, Richard
    Ye, Jinmian
    Tang, Wenqi
    Xu, Zenglin
    PROCEEDINGS OF THE THEMATIC WORKSHOPS OF ACM MULTIMEDIA 2017 (THEMATIC WORKSHOPS'17), 2017: 110 - 116
  • [39] Efficient training of large neural networks for language modeling
    Schwenk, H
    2004 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2004: 3059 - 3064
  • [40] Efficient EM training algorithm for probability neural networks
    Xiong, Hanchun
    He, Qianhua
    Li, Haizhou
    Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science), 1998, 26 (07): 25 - 32