Parameter-free Online Test-time Adaptation

Cited by: 43
Authors
Boudiaf, Malik [1 ]
Mueller, Romain [2 ]
Ben Ayed, Ismail [1 ]
Bertinetto, Luca [2 ]
Affiliations
[1] ETS Montreal, Montreal, PQ, Canada
[2] FiveAI, Cambridge, England
Keywords
CUTS;
DOI
10.1109/CVPR52688.2022.00816
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Training state-of-the-art vision models has become prohibitively expensive for researchers and practitioners. For the sake of accessibility and resource reuse, it is important to focus on adapting these models to a variety of downstream scenarios. An interesting and practical paradigm is online test-time adaptation, according to which training data is inaccessible, no labelled data from the test distribution is available, and adaptation can only happen at test time and on a handful of samples. In this paper, we investigate how test-time adaptation methods fare for a number of pre-trained models on a variety of real-world scenarios, significantly extending the way they have been originally evaluated. We show that they perform well only in narrowly-defined experimental setups and sometimes fail catastrophically when their hyperparameters are not selected for the same scenario in which they are being tested. Motivated by the inherent uncertainty around the conditions that will ultimately be encountered at test time, we propose a particularly "conservative" approach, which addresses the problem with a Laplacian Adjusted Maximum-likelihood Estimation (LAME) objective. By adapting the model's output (not its parameters), and solving our objective with an efficient concave-convex procedure, our approach exhibits a much higher average accuracy across scenarios than existing methods, while being notably faster and having a much lower memory footprint. The code is available at https://github.com/fiveai/LAME.
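The abstract describes output-level adaptation with a Laplacian-adjusted maximum-likelihood objective solved by a concave-convex procedure. As a rough illustration of that idea only (this is not the authors' implementation; their official code is in the linked repository), the NumPy sketch below refines a frozen classifier's softmax outputs for one test batch via fixed-point updates under a kNN affinity matrix. The function name lame_adapt, the cosine-similarity affinity construction, and the hyperparameters knn, n_iters, and tol are illustrative assumptions.

import numpy as np

def lame_adapt(probs, feats, knn=5, n_iters=100, tol=1e-8):
    # Hypothetical sketch: refine the softmax outputs of a *frozen* classifier
    # for one test batch, without touching the model's parameters.
    #   probs: (N, K) softmax predictions
    #   feats: (N, D) features used to build a kNN affinity matrix
    # Returns refined (N, K) soft assignments.

    # Symmetric kNN affinity matrix from cosine similarities (assumption:
    # any nonnegative, symmetric affinity fits the same template).
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)               # exclude self-similarity
    idx = np.argsort(-sim, axis=1)[:, :knn]      # k nearest neighbours per sample
    rows = np.arange(sim.shape[0])[:, None]
    W = np.zeros_like(sim)
    W[rows, idx] = np.maximum(sim[rows, idx], 0)
    W = (W + W.T) / 2                            # symmetrize

    log_p = np.log(probs + 1e-12)
    Z = probs.copy()
    for _ in range(n_iters):
        # Concave-convex-style fixed-point update:
        #   z_i proportional to exp(log p_i + sum_j w_ij z_j), renormalized per sample.
        logits = log_p + W @ Z
        Z_new = np.exp(logits - logits.max(axis=1, keepdims=True))
        Z_new /= Z_new.sum(axis=1, keepdims=True)
        if np.abs(Z_new - Z).max() < tol:        # stop once assignments stabilize
            return Z_new
        Z = Z_new
    return Z

Because only the outputs are modified, the network's weights and normalization statistics remain those of the pre-trained model, which is what "parameter-free" refers to in the title.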
Pages: 8334-8343
Page count: 10
Related Papers (10 of 50 shown)
  • [1] In Search of Lost Online Test-Time Adaptation: A Survey. Wang, Zixin; Luo, Yadan; Zheng, Liang; Chen, Zhuoxiao; Wang, Sen; Huang, Zi. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133(03): 1106-1139.
  • [2] Calibration-free online test-time adaptation for electroencephalography motor imagery decoding. Wimpff, Martin; Doebler, Mario; Yang, Bin. 2024 12TH INTERNATIONAL WINTER CONFERENCE ON BRAIN-COMPUTER INTERFACE, BCI 2024, 2024.
  • [3] Online Adaptive Fault Diagnosis With Test-Time Domain Adaptation. Wu, Kangkai; Li, Jingjing; Meng, Lichao; Li, Fengling; Lu, Ke. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2025, 21(01): 107-117.
  • [4] Test-Time Poisoning Attacks Against Test-Time Adaptation Models. Cong, Tianshuo; He, Xinlei; Shen, Yun; Zhang, Yang. 45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024: 1306-1324.
  • [5] Contrastive Test-Time Adaptation. Chen, Dian; Wang, Dequan; Darrell, Trevor; Ebrahimi, Sayna. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 295-305.
  • [6] Online Test-Time Adaptation for Patient-Independent Seizure Prediction. Mao, Tingting; Li, Chang; Zhao, Yuchang; Song, Rencheng; Chen, Xun. IEEE SENSORS JOURNAL, 2023, 23(19): 23133-23144.
  • [7] AETTA: Label-Free Accuracy Estimation for Test-Time Adaptation. Lee, Taeckyung; Chottananurak, Sorn; Gong, Taesik; Lee, Sung-Ju. 2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024: 28643-28652.
  • [8] Train/Test-Time Adaptation with Retrieval. Zancato, Luca; Achille, Alessandro; Liu, Tian Yu; Trager, Matthew; Perera, Pramuditha; Soatto, Stefano. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 15911-15921.
  • [9] TEA: Test-time Energy Adaptation. Yuan, Yige; Xu, Bingbing; Hou, Liang; Sun, Fei; Shen, Huawei; Cheng, Xueqi. 2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024: 23901-23911.
  • [10] Online Subloop Search via Uncertainty Quantization for Efficient Test-Time Adaptation. Lee, Jae-Hong; Lee, Sang-Eon; Kim, Dong-Hyun; Kim, DoHee; Chang, Joon-Hyuk. INTERSPEECH 2024, 2024: 2880-2884.