An Efficient Approach to Escalate the Speed of Training Convolution Neural Networks

Cited by: 0
Authors
P Pabitha
Anusha Jayasimhan
Affiliations
[1] Department of Computer Technology, Madras Institute of Technology Campus, Anna University
Keywords
DOI
Not available
CLC classification
TP183 [Artificial neural networks and computing]; TP391.41;
Subject classification codes
080203; 081104; 0812; 0835; 1405;
Abstract
Deep neural networks excel at image identification and computer vision applications such as visual product search, facial recognition, medical image analysis, object detection, semantic segmentation, instance segmentation, and many others. Convolutional neural networks (CNNs) are widely employed in image and video recognition applications. These networks deliver better performance but at a higher computational cost. With the advent of big data, the growing scale of datasets has made processing and model training time-consuming, resulting in longer training times. Moreover, these large-scale datasets contain redundant data points that have minimal impact on the final outcome of the model. To address these issues, an accelerated CNN system is proposed that speeds up training by eliminating non-critical data points during training, along with a model compression method. Furthermore, the critical input data are identified by aggregating the data points at two levels of granularity, which are used to evaluate their impact on the model output. Extensive experiments with the proposed method on the CIFAR-10 dataset using ResNet models show a 40% reduction in the number of FLOPs with a degradation of just 0.11% in accuracy.
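The abstract only sketches the idea of dropping non-critical data points by scoring them at two levels of granularity. As an illustration only (the function names, the contiguous grouping scheme, and the use of per-sample loss as the importance signal are assumptions here, not the paper's actual algorithm), a minimal two-granularity selection step might look like:

```python
# Hedged sketch: select "critical" training samples at two levels of
# granularity. Coarse level: partition samples into groups and score each
# group by mean loss. Fine level: rank individual samples inside the kept
# groups by their own loss. All names/parameters are illustrative.

def select_critical(per_sample_loss, keep_fraction=0.6, group_size=4):
    """Return indices of the most impactful samples, coarse-to-fine."""
    n = len(per_sample_loss)
    # Coarse granularity: contiguous groups, scored by mean loss (descending)
    groups = [list(range(i, min(i + group_size, n)))
              for i in range(0, n, group_size)]
    groups.sort(key=lambda g: -sum(per_sample_loss[i] for i in g) / len(g))
    kept_groups = groups[:max(1, int(len(groups) * keep_fraction))]
    # Fine granularity: rank surviving samples individually by loss
    candidates = [i for g in kept_groups for i in g]
    candidates.sort(key=lambda i: -per_sample_loss[i])
    return candidates[:max(1, int(n * keep_fraction))]

losses = [0.05, 0.90, 0.10, 0.80, 0.02, 0.60, 0.95, 0.01]
print(select_critical(losses))
```

In a real training loop this selection would run periodically (e.g. once per epoch), with the surviving indices fed to the data loader so that low-impact samples are skipped in subsequent iterations.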
Pages: 258-269
Page count: 12
Related papers
50 items total
  • [41] Efficient Training of Graph Neural Networks on Large Graphs
    Shen, Yanyan
    Chen, Lei
    Fang, Jingzhi
    Zhang, Xin
    Gao, Shihong
    Yin, Hongbo
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2024, 17 (12): 4237-4240
  • [42] Efficient training of RBF neural networks for pattern recognition
    Lampariello, F
    Sciandrone, M
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2001, 12 (05): 1235-1242
  • [43] MPIC: Exploring alternative approach to standard convolution in deep neural networks
    Jiang, Jie
    Zhong, Yi
    Yang, Ruoli
    Quan, Weize
    Yan, Dong-Ming
    NEURAL NETWORKS, 2025, 184
  • [44] A novel method for speed training acceleration of recurrent neural networks
    Bilski, Jaroslaw
    Rutkowski, Leszek
    Smolag, Jacek
    Tao, Dacheng
    INFORMATION SCIENCES, 2021, 553: 266-279
  • [45] An Analysis of Instance Selection for Neural Networks to Improve Training Speed
    Sun, Xunhu
    Chan, Philip K.
    2014 13TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2014: 288-293
  • [46] An efficient categorization of liver cirrhosis using convolution neural networks for health informatics
    Suganya, R.
    Rajaram, S.
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2019, 22 (Suppl 1): 47-56
  • [47] An Energy-efficient Convolution Unit for Depthwise Separable Convolutional Neural Networks
    Chong, Yi Sheng
    Goh, Wang Ling
    Ong, Yew Soon
    Nambiar, Vishnu P.
    Do, Anh Tuan
    2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021
  • [48] LEFV: A Lightweight and Efficient System for Face Verification with Deep Convolution Neural Networks
    Liu, Ming
    Zhang, Ping
    Li, Qingbao
    Liu, Jinjin
    Chen, Zhifeng
    ICVIP 2019: PROCEEDINGS OF 2019 3RD INTERNATIONAL CONFERENCE ON VIDEO AND IMAGE PROCESSING, 2019: 222-227
  • [49] Searching for Energy-Efficient Hybrid Adder-Convolution Neural Networks
    Li, Wenshuo
    Chen, Xinghao
    Bai, Jinyu
    Ning, Xuefei
    Wang, Yunhe
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022: 1942-1951