RDumb: A simple approach that questions our progress in continual test-time adaptation

Cited: 0
Authors
Press, Ori [1 ]
Schneider, Steffen [1 ,2 ]
Kümmerer, Matthias [1 ]
Bethge, Matthias [1 ]
Affiliations
[1] University of Tübingen, Tübingen AI Center, Tübingen, Germany
[2] EPFL, Geneva, Switzerland
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Test-Time Adaptation (TTA) allows pre-trained models to be updated to changing data distributions at deployment time. While early work tested these algorithms on individual, fixed distribution shifts, recent work has proposed and applied methods for continual adaptation over long timescales. To examine the reported progress in the field, we propose the Continually Changing Corruptions (CCC) benchmark to measure the asymptotic performance of TTA techniques. We find that eventually all but one of the state-of-the-art methods collapse and perform worse than a non-adapting model, including models specifically proposed to be robust to performance collapse. In addition, we introduce a simple baseline, "RDumb", that periodically resets the model to its pretrained state. RDumb performs better than or on par with the previously proposed state of the art on all considered benchmarks. Our results show that previous TTA approaches are neither effective at regularizing adaptation to avoid collapse nor able to outperform a simplistic resetting strategy.
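The resetting baseline described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a generic `adapt_fn` (standing in for any TTA update, e.g. an entropy-minimization step) and a hypothetical `reset_interval` hyperparameter after which the adapted parameters are discarded in favor of the frozen pretrained snapshot.

```python
import copy


class RDumbWrapper:
    """Sketch of a periodic-reset TTA baseline (assumed interface).

    `model_state` is any copyable parameter container; `adapt_fn`
    maps (state, batch) -> new state, standing in for one TTA step.
    """

    def __init__(self, model_state, adapt_fn, reset_interval=1000):
        self.pretrained_state = copy.deepcopy(model_state)  # frozen snapshot
        self.state = copy.deepcopy(model_state)             # adapted online
        self.adapt_fn = adapt_fn
        self.reset_interval = reset_interval
        self.steps = 0

    def step(self, batch):
        # Periodically discard the adapted weights: this bounds how far the
        # model can drift, guarding against collapse over long timescales.
        if self.steps > 0 and self.steps % self.reset_interval == 0:
            self.state = copy.deepcopy(self.pretrained_state)
        self.state = self.adapt_fn(self.state, batch)
        self.steps += 1
        return self.state
```

The key design point is that no regularizer is needed: collapse is avoided purely by bounding the length of any adaptation run.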
Pages: 21
Related papers
50 items total
  • [1] Continual Test-Time Domain Adaptation
    Wang, Qin
    Fink, Olga
    Van Gool, Luc
    Dai, Dengxin
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 7191 - 7201
  • [2] Navigating Continual Test-time Adaptation with Symbiosis Knowledge
    Yang, Xu
    Li, Mogi
    Yin, Jie
    Wei, Kun
    Deng, Cheng
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 5326 - 5334
  • [3] Multiple Teacher Model for Continual Test-Time Domain Adaptation
    Wang, Ran
    Zuo, Hua
    Fang, Zhen
    Lu, Jie
    ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT I, 2024, 14471 : 304 - 314
  • [4] Compression and restoration: exploring elasticity in continual test-time adaptation
    Li, Jingwei
    Liu, Chengbao
    Bai, Xiwei
    Tan, Jie
    Chu, Jiaqi
    Wang, Yudong
    MACHINE LEARNING, 2025, 114 (04)
  • [5] Robust Mean Teacher for Continual and Gradual Test-Time Adaptation
    Döbler, Mario
    Marsden, Robert A.
    Yang, Bin
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 7704 - 7714
  • [6] Exploring Safety Supervision for Continual Test-time Domain Adaptation
    Yang, Xu
    Gu, Yanan
    Wei, Kun
    Deng, Cheng
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1649 - 1657
  • [7] Noise-Robust Continual Test-Time Domain Adaptation
    Yu, Zhiqi
    Li, Jingjing
    Du, Zhekai
    Li, Fengling
    Zhu, Lei
    Yang, Yang
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 2654 - 2662
  • [8] AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation
    Sojka, Damian
    Cygert, Sebastian
    Twardowski, Bartlomiej
    Trzcinski, Tomasz
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 3483 - 3487
  • [9] Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation
    Liu, Jiaming
    Xu, Ran
    Yang, Senqiao
    Zhang, Renrui
    Zhang, Qizhe
    Chen, Zehui
    Guo, Yandong
    Zhang, Shanghang
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 28653 - 28663
  • [10] NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation
    Gong, Taesik
    Jeong, Jongheon
    Kim, Taewon
    Kim, Yewon
    Shin, Jinwoo
    Lee, Sung-Ju
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,