Summarizing Stream Data for Memory-Constrained Online Continual Learning

Cited: 0
Authors
Gu, Jianyang [1 ,2 ]
Wang, Kai [2 ]
Jiang, Wei [1 ]
You, Yang [2 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Natl Univ Singapore, Singapore, Singapore
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Replay-based methods have proved effective for online continual learning by rehearsing past samples from an auxiliary memory. While much effort has gone into improving training schemes built on this memory, the information carried by each stored sample remains under-investigated. When storage space is restricted, the informativeness of the memory becomes critical for effective replay. Although some works design specific strategies to select representative samples, storing only a small number of original images still leaves the storage space under-utilized. To this end, we propose to Summarize the knowledge from the Stream Data (SSD) into more informative samples by distilling the training characteristics of real images. By maintaining consistency with the training gradients and the relationship to past tasks, the summarized samples represent the stream data better than the original images. Extensive experiments on multiple online continual learning benchmarks show that the proposed SSD method significantly enhances the replay effect. With limited extra computational overhead, SSD provides an accuracy boost of more than 3% on sequential CIFAR-100 under an extremely restricted memory buffer. Code is available at https://github.com/vimar-gu/SSD.
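The summarization idea described in the abstract can be pictured as a small gradient-matching update applied to learnable memory images on each incoming stream batch. The snippet below is a minimal sketch of that idea in PyTorch, under stated assumptions: the function names (grad_match_loss, summarize_step) and the cosine-based matching loss are illustrative choices, not the authors' released implementation (see the repository linked above), and the relationship term to past tasks is omitted.

```python
# Minimal sketch of gradient-matching-based memory summarization.
# All names and the specific loss form are assumptions for illustration;
# the official code lives at https://github.com/vimar-gu/SSD.
import torch
import torch.nn.functional as F


def grad_match_loss(model, real_x, real_y, syn_x, syn_y):
    """Distance between the gradients induced by a real stream batch and
    by the learnable summarized (synthetic) memory samples."""
    criterion = torch.nn.CrossEntropyLoss()
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradients from the real stream batch (treated as the target signal).
    real_grads = torch.autograd.grad(criterion(model(real_x), real_y), params)
    # Gradients from the synthetic memory; keep the graph so the loss can
    # back-propagate into the memory images themselves.
    syn_grads = torch.autograd.grad(
        criterion(model(syn_x), syn_y), params, create_graph=True
    )

    # Cosine-style matching per parameter tensor, a common choice in
    # gradient-matching data condensation methods.
    loss = 0.0
    for g_r, g_s in zip(real_grads, syn_grads):
        g_r = g_r.detach().flatten()
        g_s = g_s.flatten()
        loss = loss + (1.0 - F.cosine_similarity(g_r, g_s, dim=0))
    return loss


def summarize_step(model, stream_x, stream_y, mem_x, mem_y, lr=0.1):
    """One online update of the learnable memory images toward the current
    stream batch. mem_x must be a leaf tensor with requires_grad=True."""
    opt = torch.optim.SGD([mem_x], lr=lr)
    opt.zero_grad()
    loss = grad_match_loss(model, stream_x, stream_y, mem_x, mem_y)
    loss.backward()  # updates only mem_x; model parameters are not stepped
    opt.step()
    return loss.item()
```

In an online setting, a step like summarize_step would be invoked on each incoming stream batch before the usual replay-based model update, so the stored images keep absorbing the training characteristics of the stream instead of remaining frozen copies of a few original samples.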
Pages: 12217 - 12225
Page count: 9