Continuous transfer of neural network representational similarity for incremental learning

Cited by: 31
Authors
Tian, Songsong [1 ,2 ]
Li, Weijun [1 ,3 ,4 ]
Ning, Xin [1 ,3 ,4 ,5 ]
Ran, Hang [1 ]
Qin, Hong [1 ,3 ,4 ]
Tiwari, Prayag [6 ]
Affiliations
[1] Chinese Acad Sci, Inst Semicond, Beijing 100083, Peoples R China
[2] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100049, Peoples R China
[3] Univ Chinese Acad Sci, Ctr Mat Sci & Optoelect Engn, Beijing 100049, Peoples R China
[4] Univ Chinese Acad Sci, Sch Integrated Circuits, Beijing 100049, Peoples R China
[5] Zhongke Ruitu Technol Co Ltd, Beijing 100096, Peoples R China
[6] Halmstad Univ, Sch Informat Technol, S-30118 Halmstad, Sweden
Keywords
Incremental learning; Pre-trained model; Knowledge distillation; Neural network representation;
DOI
10.1016/j.neucom.2023.126300
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The incremental learning paradigm in machine learning has long been a focus of academic research. It resembles the way biological systems learn and reduces energy consumption by avoiding excessive retraining. Existing studies exploit the powerful feature extraction capabilities of pre-trained models for incremental learning, but they still underutilize the feature knowledge encoded in the neural network. To address this issue, this paper proposes Pre-trained Model Knowledge Distillation (PMKD), a method that combines knowledge distillation of neural network representations with replay. A loss function based on centered kernel alignment transfers representational knowledge from the pre-trained model to the incremental model layer by layer. Additionally, a memory buffer used for Dark Experience Replay helps the model better retain past knowledge. Experiments show that PMKD achieves superior performance across various datasets and buffer sizes, reaching the best class-incremental learning accuracy among the compared methods. The open-source code is published at https://github.com/TianSongS/PMKD-IL. (c) 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
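The centered kernel alignment (CKA) measure that the abstract's distillation loss builds on can be sketched as follows. This is a minimal NumPy sketch of the standard linear CKA between two layers' activations; PMKD's actual layer-wise loss, kernel choices, and training details are not reproduced here and the function name is illustrative:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between activation matrices
    X (n, d1) and Y (n, d2) computed over the same n examples.
    Returns a similarity in [0, 1]; 1 means identical representations
    up to an orthogonal transform and isotropic scaling."""
    # Center each feature dimension across the n examples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Squared Frobenius norm of the cross term over the self terms.
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# A distillation loss would then push the incremental model's layer
# toward the pre-trained layer by minimizing, e.g., 1 - linear_cka(X, Y).
```

Because CKA is invariant to orthogonal transforms and isotropic scaling of the activations, it can compare layers of different widths, which is what makes a layer-by-layer transfer from a pre-trained model to an incremental model feasible.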
Pages: 11
Related papers
50 records in total
  • [21] Feasibility Study of Incremental Neural Network Based Test Escape Detection by Introducing Transfer Learning Technique
    Takaya, Ayano
    Shintani, Michihiro
    [J]. 2023 IEEE INTERNATIONAL TEST CONFERENCE IN ASIA, ITC-ASIA, 2023,
  • [22] Deep Representational Similarity Learning for Analyzing Neural Signatures in Task-based fMRI Dataset
    Yousefnezhad, Muhammad
    Sawalha, Jeffrey
    Selvitella, Alessandro
    Zhang, Daoqiang
    [J]. NEUROINFORMATICS, 2021, 19 (03) : 417 - 431
  • [24] Knowledge Transfer in Incremental Learning for Multilingual Neural Machine Translation
    Huang, Kaiyu
    Li, Peng
    Ma, Jin
    Yao, Ting
    Liu, Yang
    [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 15286 - 15304
  • [25] Transfer Learning for Human Activity Recognition Using Representational Analysis of Neural Networks
    An, Sizhe
    Bhat, Ganapati
    Gumussoy, Suat
    Ogras, Umit
    [J]. ACM Transactions on Computing for Healthcare, 2023, 4 (01):
  • [26] Learning time-series similarity with a neural network by combining similarity measures
    Sagrebin, Maria
    Goerke, Nils
    [J]. ARTIFICIAL NEURAL NETWORKS - ICANN 2006, PT 2, 2006, 4132 : 123 - 132
  • [27] Evolving improved incremental learning schemes for neural network systems
    Seipone, T
    Bullinaria, JA
    [J]. 2005 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION, VOLS 1-3, PROCEEDINGS, 2005, : 2002 - 2009
  • [28] Using a Gaussian Mixture Neural Network for Incremental Learning and Robotics
    Heinen, Milton Roberto
    Engel, Paulo Martins
    Pinto, Rafael C.
    [J]. 2012 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2012,
  • [29] A self-organizing incremental neural network for imbalance learning
    Yue Shao
    Baile Xu
    Furao Shen
    Jian Zhao
    [J]. Neural Computing and Applications, 2023, 35 : 9789 - 9802
  • [30] Incremental Learning of Deep Neural Network for Robust Vehicle Classification
    Athriyah, Ahmad Mimi Nathiratul
    Amir, Abdul Kadir Muhammad
    Zaki, Hasan F. M.
    Zulkifli, Zainal Abidin
    Hasbullah, Abdul Rahman
    [J]. JURNAL KEJURUTERAAN, 2022, 34 (05) : 843 - 850