Incremental Learning in Deep Convolutional Neural Networks Using Partial Network Sharing

Cited by: 55
Authors
Sarwar, Syed Shakib [1 ]
Ankit, Aayush [1 ]
Roy, Kaushik [1 ]
Affiliations
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
Funding
US National Science Foundation;
Keywords
Incremental learning; catastrophic forgetting; lifelong learning; energy-efficient learning; network sharing;
DOI
10.1109/ACCESS.2019.2963056
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Deep convolutional neural network (DCNN) based supervised learning is a widely practiced approach for large-scale image classification. However, retraining these large networks to accommodate new, previously unseen data demands high computational time and energy. Moreover, previously seen training samples may no longer be available at the time of retraining. We propose an efficient training methodology and an incrementally growing DCNN that learns new tasks while sharing part of the base network. The proposed methodology is inspired by transfer learning techniques, but it does not forget previously learned tasks. An updated network for learning a new set of classes is formed from the previously learned convolutional layers (shared from the initial part of the base network) together with a few new convolutional kernels added to the later layers of the network. We employ a 'clone-and-branch' technique with calibration, which allows the network to learn new tasks (containing classes with features similar to those of the old tasks) one after another without any performance loss on the old tasks. We evaluated the proposed scheme on several recognition applications. The classification accuracy achieved by our approach is comparable to that of the regular incremental learning approach (where networks are updated with new training samples only, without any network sharing), while achieving energy efficiency and reductions in storage requirements, memory accesses, and training time.
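The partial-sharing idea in the abstract can be illustrated with a minimal PyTorch sketch. Everything below is an assumption made for illustration: the class names (SharedBase, TaskBranch, IncrementalNet), the layer sizes, and the point at which the network splits into shared and task-specific parts are hypothetical, not the authors' exact architecture or calibration procedure.

```python
# Minimal sketch of incremental learning with partial network sharing.
# Names and layer sizes are illustrative assumptions, not the paper's
# exact architecture or its calibration step.
import torch
import torch.nn as nn

class SharedBase(nn.Module):
    """Early convolutional layers, trained on the base task and then
    frozen so that every subsequent task reuses (shares) these features."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x)

class TaskBranch(nn.Module):
    """Later convolutional layers plus a classifier; a new branch is
    cloned and retrained for each new set of classes ('clone-and-branch')."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class IncrementalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.base = SharedBase()
        self.branches = nn.ModuleList()

    def add_task(self, num_classes):
        branch = TaskBranch(num_classes)
        if self.branches:
            # Freeze the shared layers: old branches keep consuming the
            # same features, so earlier tasks are preserved.
            for p in self.base.parameters():
                p.requires_grad = False
            # "Clone": warm-start the new branch's conv layers from the
            # most recent branch before retraining on the new classes.
            branch.features.load_state_dict(self.branches[-1].features.state_dict())
        self.branches.append(branch)
        return branch  # train base+branch on task 0, branch only afterwards

    def forward(self, x, task_id):
        return self.branches[task_id](self.base(x))

# Usage: learn a 10-class base task, then add a 5-class task; only the
# new branch would be optimized when training the second task.
net = IncrementalNet()
net.add_task(num_classes=10)
net.add_task(num_classes=5)
logits = net(torch.randn(4, 3, 32, 32), task_id=1)  # -> shape (4, 5)
```

In this sketch, freezing the shared layers is what prevents catastrophic forgetting: old branches keep producing the same outputs because both their own weights and the shared features they consume are untouched. Warm-starting each new branch from the previous one loosely mirrors the 'clone' step of clone-and-branch; the paper's calibration step is not modeled here.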
Pages: 4615-4628
Page count: 14
Related Papers
50 records in total
  • [1] Incremental Learning of Convolutional Neural Networks
    Medera, Dusan
    Babinec, Stefan
    [J]. IJCCI 2009: PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON COMPUTATIONAL INTELLIGENCE, 2009: 547 - +
  • [2] Learning With Sharing: An Edge-Optimized Incremental Learning Method for Deep Neural Networks
    Hussain, Muhammad Awais
    Huang, Shih-An
    Tsai, Tsung-Han
    [J]. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2023, 11 (02) : 461 - 473
  • [3] Progressive Convolutional Neural Network for Incremental Learning
    Siddiqui, Zahid Ali
    Park, Unsang
    [J]. ELECTRONICS, 2021, 10 (16)
  • [4] Learning Game by Profit Sharing Using Convolutional Neural Network
    Hasuike, Nobuaki
    Osana, Yuko
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT I, 2018, 11139 : 43 - 50
  • [5] Tree-CNN: A hierarchical Deep Convolutional Neural Network for incremental learning
    Roy, Deboleena
    Panda, Priyadarshini
    Roy, Kaushik
    [J]. NEURAL NETWORKS, 2020, 121 : 148 - 160
  • [6] Efficient Incremental Training for Deep Convolutional Neural Networks
    Tao, Yudong
    Tu, Yuexuan
    Shyu, Mei-Ling
    [J]. 2019 2ND IEEE CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR 2019), 2019: 286 - 291
  • [7] Comparing Incremental Learning Strategies for Convolutional Neural Networks
    Lomonaco, Vincenzo
    Maltoni, Davide
    [J]. ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, 2016, 9896 : 175 - 184
  • [8] Learning in Action Game by Profit Sharing Using Convolutional Neural Network
    Murakami, Kaichi
    Osana, Yuko
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II, 2017, 10614 : 739 - 740
  • [9] Classification of Partial Discharge Images Using Deep Convolutional Neural Networks
    Florkowski, Marek
    [J]. ENERGIES, 2020, 13 (20)
  • [10] Detection of pneumonia using convolutional neural networks and deep learning
    Szepesi, Patrik
    Szilagyi, Laszlo
    [J]. BIOCYBERNETICS AND BIOMEDICAL ENGINEERING, 2022, 42 (03) : 1012 - 1022