Robust Mean Teacher for Continual and Gradual Test-Time Adaptation

Cited by: 13
Authors
Doebler, Mario [1 ]
Marsden, Robert A. [1 ]
Yang, Bin [1 ]
Institutions
[1] Univ Stuttgart, Stuttgart, Germany
DOI
10.1109/CVPR52729.2023.00744
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Since experiencing domain shifts during test-time is inevitable in practice, test-time adaptation (TTA) continues to adapt the model after deployment. Recently, the area of continual and gradual test-time adaptation emerged. In contrast to standard TTA, continual TTA considers not only a single domain shift but a sequence of shifts. Gradual TTA further exploits the property that some shifts evolve gradually over time. Since long test sequences are present in both settings, error accumulation needs to be addressed for methods relying on self-training. In this work, we propose and show that in the setting of TTA, the symmetric cross-entropy is better suited as a consistency loss for mean teachers than the commonly used cross-entropy. This is justified by our analysis of the (symmetric) cross-entropy's gradient properties. To pull the test feature space closer to the source domain, where the pre-trained model is well-posed, contrastive learning is leveraged. Since applications differ in their requirements, we address several settings, including having source data available and the more challenging source-free setting. We demonstrate the effectiveness of our proposed method "robust mean teacher" (RMT) on the continual and gradual corruption benchmarks CIFAR10C, CIFAR100C, and ImageNet-C. We further consider ImageNet-R and propose a new continual DomainNet-126 benchmark. State-of-the-art results are achieved on all benchmarks.
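The symmetric cross-entropy consistency loss described in the abstract applies the cross-entropy in both directions between the teacher's and the student's soft predictions. The following is a minimal NumPy sketch of that loss, not the authors' implementation; the function names and the epsilon stabilizer are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def symmetric_cross_entropy(student_logits, teacher_logits, eps=1e-8):
    """Soft cross-entropy applied in both directions:
    CE(teacher -> student) + CE(student -> teacher), averaged over the batch."""
    p_s = softmax(student_logits)
    p_t = softmax(teacher_logits)
    ce_ts = -(p_t * np.log(p_s + eps)).sum(axis=-1)  # teacher as target
    ce_st = -(p_s * np.log(p_t + eps)).sum(axis=-1)  # student as target
    return (ce_ts + ce_st).mean()
```

Unlike the plain cross-entropy with the teacher as a fixed target, this loss is symmetric in its two arguments, which is the property the paper's gradient analysis builds on.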
Pages: 7704 - 7714
Page count: 11
Related Papers
50 records total
  • [1] Multiple Teacher Model for Continual Test-Time Domain Adaptation
    Wang, Ran
    Zuo, Hua
    Fang, Zhen
    Lu, Jie
    [J]. ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT I, 2024, 14471 : 304 - 314
  • [2] Noise-Robust Continual Test-Time Domain Adaptation
    Yu, Zhiqi
    Li, Jingjing
    Du, Zhekai
    Li, Fengling
    Zhu, Lei
    Yang, Yang
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 2654 - 2662
  • [3] Continual Test-Time Domain Adaptation
    Wang, Qin
    Fink, Olga
    Van Gool, Luc
    Dai, Dengxin
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 7191 - 7201
  • [4] NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation
    Gong, Taesik
    Jeong, Jongheon
    Kim, Taewon
    Kim, Yewon
    Shin, Jinwoo
    Lee, Sung-Ju
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [5] Exploring Safety Supervision for Continual Test-time Domain Adaptation
    Yang, Xu
    Gu, Yanan
    Wei, Kun
    Deng, Cheng
    [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1649 - 1657
  • [6] Robust Test-Time Adaptation in Dynamic Scenarios
    Yuan, Longhui
    Xie, Binhui
    Li, Shuang
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 15922 - 15932
  • [7] RDumb: A simple approach that questions our progress in continual test-time adaptation
    Press, Ori
    Schneider, Steffen
    Kummerer, Matthias
    Bethge, Matthias
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [8] SoTTA: Robust Test-Time Adaptation on Noisy Data Streams
    Gong, Taesik
    Kim, Yewon
    Lee, Taeckyung
    Chottananurak, Sorn
    Lee, Sung-Ju
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [9] Contrastive Test-Time Adaptation
    Chen, Dian
    Wang, Dequan
    Darrell, Trevor
    Ebrahimi, Sayna
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 295 - 305
  • [10] Robust Test-Time Adaptation for Zero-Shot Prompt Tuning
    Zhang, Ding-Chu
    Zhou, Zhi
    Li, Yu-Feng
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16714 - 16722