EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization

Cited by: 10
Authors
Song, Junha [1 ,2 ]
Lee, Jungsoo [1 ]
Kweon, In So [2 ]
Choi, Sungha [1 ]
Affiliations
[1] Qualcomm AI Research, San Diego, CA 92121 USA
[2] Korea Advanced Institute of Science & Technology (KAIST), Daejeon, South Korea
DOI
10.1109/CVPR52729.2023.01147
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a simple yet effective approach that improves continual test-time adaptation (TTA) in a memory-efficient manner. TTA may primarily be conducted on edge devices with limited memory, so reducing memory is crucial but has been overlooked in previous TTA studies. In addition, long-term adaptation often leads to catastrophic forgetting and error accumulation, which hinders applying TTA in real-world deployments. Our approach consists of two components to address these issues. First, we present lightweight meta networks that can adapt the frozen original networks to the target domain. This novel architecture minimizes memory consumption by decreasing the size of intermediate activations required for backpropagation. Second, our novel self-distilled regularization controls the output of the meta networks not to deviate significantly from the output of the frozen original networks, thereby preserving well-trained knowledge from the source domain. Without additional memory, this regularization prevents error accumulation and catastrophic forgetting, resulting in stable performance even in long-term test-time adaptation. We demonstrate that our simple yet effective strategy outperforms other state-of-the-art methods on various benchmarks for image classification and semantic segmentation tasks. Notably, our proposed method with ResNet-50 and WideResNet-40 takes 86% and 80% less memory than the recent state-of-the-art method, CoTTA.
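The abstract describes two combined objectives: an unsupervised adaptation loss (entropy minimization on test predictions) and a self-distilled regularization that penalizes the meta networks' outputs for drifting from the frozen original networks' outputs. The following is a minimal numpy sketch of that loss structure only, not the authors' implementation; the function names (`self_distill_reg`, `total_loss`) and the weighting factor `lam` are illustrative assumptions, and the real method operates on intermediate activations of model partitions during backpropagation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy_loss(logits):
    # entropy minimization: the usual unsupervised TTA objective
    p = softmax(logits)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=-1))

def self_distill_reg(meta_feats, frozen_feats):
    # mean absolute deviation between the adapted (meta) and frozen
    # original outputs, averaged over the model's partitioned groups;
    # keeping this small preserves source-domain knowledge
    return float(np.mean([np.mean(np.abs(m - f))
                          for m, f in zip(meta_feats, frozen_feats)]))

def total_loss(logits, meta_feats, frozen_feats, lam=0.5):
    # lam (assumed here) trades off adaptation vs. forgetting
    return entropy_loss(logits) + lam * self_distill_reg(meta_feats, frozen_feats)
```

When the meta networks have not deviated at all, the regularizer is zero and only the entropy term drives adaptation; as drift grows, the penalty pulls the outputs back toward the frozen source model, which is how error accumulation and catastrophic forgetting are kept in check without storing extra source data.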
Pages: 11920-11929
Page count: 10
Related Papers
38 items in total
  • [1] Continual Test-Time Domain Adaptation
    Wang, Qin
    Fink, Olga
    Van Gool, Luc
    Dai, Dengxin
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 7191 - 7201
  • [2] Multiple Teacher Model for Continual Test-Time Domain Adaptation
    Wang, Ran
    Zuo, Hua
    Fang, Zhen
    Lu, Jie
    [J]. ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT I, 2024, 14471 : 304 - 314
  • [3] Robust Mean Teacher for Continual and Gradual Test-Time Adaptation
    Doebler, Mario
    Marsden, Robert A.
    Yang, Bin
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 7704 - 7714
  • [4] Exploring Safety Supervision for Continual Test-time Domain Adaptation
    Yang, Xu
    Gu, Yanan
    Wei, Kun
    Deng, Cheng
    [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1649 - 1657
  • [5] Noise-Robust Continual Test-Time Domain Adaptation
    Yu, Zhiqi
    Li, Jingjing
    Du, Zhekai
    Li, Fengling
    Zhu, Lei
    Yang, Yang
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 2654 - 2662
  • [6] Test-time adaptation via self-training with future information
    Wen, Xin
    Shen, Hao
    Zhao, Zhongqiu
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (03)
  • [7] NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation
    Gong, Taesik
    Jeong, Jongheon
    Kim, Taewon
    Kim, Yewon
    Shin, Jinwoo
    Lee, Sung-Ju
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [8] RDumb: A simple approach that questions our progress in continual test-time adaptation
    Press, Ori
    Schneider, Steffen
    Kummerer, Matthias
    Bethge, Matthias
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [9] Efficient Test-Time Model Adaptation without Forgetting
    Niu, Shuaicheng
    Wu, Jiaxiang
    Zhang, Yifan
    Chen, Yaofo
    Zheng, Shijian
    Zhao, Peilin
    Tan, Mingkui
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [10] Improving Test-Time Adaptation Via Shift-Agnostic Weight Regularization and Nearest Source Prototypes
    Choi, Sungha
    Yang, Seunghan
    Choi, Seokeon
    Yun, Sungrack
    [J]. COMPUTER VISION - ECCV 2022, PT XXXIII, 2022, 13693 : 440 - 458