Learning With Sharing: An Edge-Optimized Incremental Learning Method for Deep Neural Networks

Cited by: 3
Authors
Hussain, Muhammad Awais [1 ]
Huang, Shih-An [1 ]
Tsai, Tsung-Han [2 ]
Affiliations
[1] Natl Cent Univ, Dept Elect Engn, Taoyuan 320, Taiwan
[2] Natl Cent Univ, Dept Elect Engn, Taoyuan 320, Taiwan
Keywords
Incremental learning; deep neural networks; learning on-chip; energy-efficient learning; network sharing
DOI
10.1109/TETC.2022.3210905
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Incremental learning techniques aim to extend a pre-trained Deep Neural Network (DNN) model with new classes. However, DNNs suffer from catastrophic forgetting during the incremental learning process. Existing incremental learning techniques reduce the effect of catastrophic forgetting either by replaying samples of previously seen data while new classes are added or by designing complex model architectures. Both approaches incur high design complexity and memory requirements, which make incremental learning impractical on edge devices with limited memory and computation resources. We therefore propose a new incremental learning technique, Learning with Sharing (LwS), based on the concept of transfer learning. LwS aims to reduce training complexity and storage requirements while achieving high accuracy during incremental learning. It adds new classes incrementally by cloning and sharing fully connected (FC) layers, preserving the knowledge of existing classes without storing any data from previous classes. We show that LwS outperforms state-of-the-art techniques in accuracy on the CIFAR-100, Caltech-101, and UCSD Birds datasets.
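The abstract only outlines the cloning-and-sharing mechanism. As a rough illustration of how a frozen shared backbone plus cloned FC heads could add classes without replaying old data, here is a minimal PyTorch-style sketch; the class and method names (LwSClassifier, add_classes) are hypothetical, and the paper's exact procedure may differ.

    import copy
    import torch
    import torch.nn as nn

    class LwSClassifier(nn.Module):
        """Sketch of incremental learning via a shared backbone and cloned FC heads.

        Assumes `backbone` maps inputs to a feature vector and `base_head` is an
        nn.Sequential of FC layers ending in nn.Linear. Names are illustrative,
        not taken from the paper.
        """

        def __init__(self, backbone, base_head):
            super().__init__()
            self.backbone = backbone              # shared feature extractor
            self.heads = nn.ModuleList([base_head])
            for p in self.backbone.parameters():
                p.requires_grad_(False)           # freeze shared knowledge

        def add_classes(self, num_new_classes):
            # Clone the latest head as a warm start for the new classes.
            new_head = copy.deepcopy(self.heads[-1])
            # Freeze every existing head so previous class scores stay intact.
            for head in self.heads:
                for p in head.parameters():
                    p.requires_grad_(False)
            # Resize the clone's output layer; only this clone is trained.
            new_head[-1] = nn.Linear(new_head[-1].in_features, num_new_classes)
            self.heads.append(new_head)
            return new_head

        def forward(self, x):
            feats = self.backbone(x)
            # Old heads keep their original logits; the new head appends its own.
            return torch.cat([head(feats) for head in self.heads], dim=1)

In an incremental step, only the parameters of the head returned by add_classes would be passed to the optimizer; the frozen backbone and earlier heads keep the scores of existing classes unchanged, so no data from previous classes needs to be stored.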
Pages: 461-473
Page count: 13