Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization

Cited by: 0
Authors
Iwasawa, Yusuke [1 ]
Matsuo, Yutaka [1 ]
Affiliation
[1] Univ Tokyo, Tokyo, Japan
Keywords
DOI
N/A
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a new algorithm for domain generalization (DG), the test-time template adjuster (T3A), which aims to make a model robust to unknown distribution shift. Unlike existing methods that focus on the training phase, our method focuses on the test phase, i.e., the model corrects its own predictions during test time. Specifically, T3A adjusts a trained linear classifier (the last layer of a deep neural network) with the following procedure: (1) compute a pseudo-prototype representation for each class using online unlabeled data, pseudo-labeled by the base classifier trained on the source domains, and (2) classify each sample based on its distance to the pseudo-prototypes. T3A is back-propagation-free and modifies only the linear layer; therefore, the increase in computational cost during inference is negligible, and it avoids the catastrophic failures that can be caused by stochastic optimization. Despite its simplicity, T3A can leverage knowledge about the target domain by using off-the-shelf test-time data and improve performance. We tested our method on four domain generalization benchmarks, namely PACS, VLCS, OfficeHome, and TerraIncognita, with various backbone networks including ResNet18, ResNet50, Big Transfer (BiT), Vision Transformers (ViT), and MLP-Mixer. The results show that T3A stably improves performance on unseen domains across choices of backbone network and outperforms existing domain generalization methods.
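The two-step procedure in the abstract (pseudo-labeling into per-class support sets, then classifying by similarity to the resulting pseudo-prototypes) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name `T3ASketch` is hypothetical, the backbone is assumed to be frozen and to emit a feature vector `z`, and the paper's additional entropy-based filtering of the support set is omitted.

```python
import numpy as np

class T3ASketch:
    """Minimal sketch of the T3A idea: at test time, replace the trained
    linear head with class pseudo-prototypes built from online unlabeled
    data. Illustrative only; details are assumptions, not the paper's code."""

    def __init__(self, W):
        # W: (num_classes, dim) weights of the trained linear classifier.
        # Each class's support set is seeded with its normalized weight
        # vector, so the adjusted head starts close to the original one.
        self.supports = [[w / np.linalg.norm(w)] for w in W]

    def _prototypes(self):
        # Pseudo-prototype = mean of each class's support set.
        return np.stack([np.mean(s, axis=0) for s in self.supports])

    def predict(self, z):
        # z: (dim,) feature from the frozen backbone for one test sample.
        z = z / np.linalg.norm(z)
        # (1) pseudo-label the sample with the current prototypes and
        # add its representation to that class's support set.
        y_hat = int(np.argmax(self._prototypes() @ z))
        self.supports[y_hat].append(z)
        # (2) classify by similarity to the updated pseudo-prototypes.
        # No back-propagation is involved; only the linear head changes.
        return int(np.argmax(self._prototypes() @ z))
```

Because the update is just an append and a mean, the per-sample overhead is a handful of vector operations, which matches the abstract's claim that the extra inference cost is negligible.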
Pages: 14
Related Papers
50 records in total
  • [1] Detecting and Mitigating Test-time Failure Risks via Model-agnostic Uncertainty Learning
    Lahoti, Preethi
    Gummadi, Krishna P.
    Weikum, Gerhard
    2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021), 2021, : 1174 - 1179
  • [2] Domain Generalization via Model-Agnostic Learning of Semantic Features
    Dou, Qi
    Castro, Daniel C.
    Kamnitsas, Konstantinos
    Glocker, Ben
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [3] Improved Test-Time Adaptation for Domain Generalization
    Chen, Liang
    Zhang, Yong
    Song, Yibing
    Shan, Ying
    Liu, Lingqiao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 24172 - 24182
  • [4] Model-agnostic Measure of Generalization Difficulty
    Boopathy, Akhilan
    Liu, Kevin
    Hwang, Jaedong
    Ge, Shu
    Mohammedsaleh, Asaad
    Fiete, Ila
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [5] Sharpness-Aware Model-Agnostic Long-Tailed Domain Generalization
    Su, Houcheng
    Luo, Weihao
    Liu, Daixian
    Wang, Mengzhu
    Tang, Jing
    Chen, Junyang
    Wang, Cong
    Chen, Zhenghan
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 13, 2024, : 15091 - 15099
  • [6] ConfusionFlow: A Model-Agnostic Visualization for Temporal Analysis of Classifier Confusion
    Hinterreiter, Andreas
    Ruch, Peter
    Stitz, Holger
    Ennemoser, Martin
    Bernard, Jurgen
    Strobelt, Hendrik
    Streit, Marc
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2022, 28 (02) : 1222 - 1236
  • [7] Seek Common Ground While Reserving Differences: A Model-Agnostic Module for Noisy Domain Adaptation
    Zuo, Yukun
    Yao, Hantao
    Zhuang, Liansheng
    Xu, Changsheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 1020 - 1030
  • [8] INMO: A Model-Agnostic and Scalable Module for Inductive Collaborative Filtering
    Wu, Yunfan
    Cao, Qi
    Shen, Huawei
    Tao, Shuchang
    Cheng, Xueqi
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 91 - 101
  • [9] Robust Text Classifier on Test-Time Budgets
    Parvez, Md Rizwan
    Bolukbasi, Tolga
    Chang, Kai-Wei
    Saligrama, Venkatesh
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 1167 - 1172
  • [10] Multiple Teacher Model for Continual Test-Time Domain Adaptation
    Wang, Ran
    Zuo, Hua
    Fang, Zhen
    Lu, Jie
    ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT I, 2024, 14471 : 304 - 314