Cycle Self-Training for Domain Adaptation

Cited: 0
Authors
Liu, Hong [1 ]
Wang, Jianmin [2 ]
Long, Mingsheng [2 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China
[2] Tsinghua Univ, Sch Software, BNRist, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant representations to narrow the domain shift, which are empirically effective but theoretically challenged by hardness or impossibility theorems. Recently, self-training has been gaining momentum in UDA, exploiting unlabeled target data by training with target pseudo-labels. However, as corroborated in this work, under distributional shift the pseudo-labels can be unreliable, deviating substantially from the target ground truth. In this paper, we propose Cycle Self-Training (CST), a principled self-training algorithm that explicitly enforces pseudo-labels to generalize across domains. CST cycles between a forward step and a reverse step until convergence. In the forward step, CST generates target pseudo-labels with a source-trained classifier. In the reverse step, CST trains a target classifier using target pseudo-labels, and then updates the shared representations to make the target classifier perform well on the source data. We introduce the Tsallis entropy as a confidence-friendly regularization to improve the quality of target pseudo-labels. We analyze CST theoretically under realistic assumptions, and provide hard cases where CST recovers the target ground truth while both invariant feature learning and vanilla self-training fail. Empirical results indicate that CST significantly improves over the state of the art on visual recognition and sentiment analysis benchmarks.
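The forward/reverse cycle and the Tsallis regularizer described in the abstract can be sketched in a few lines. The following is a minimal NumPy toy, not the authors' implementation: `fit_linear` and `predict` are illustrative least-squares stand-ins for the source and target classifier heads, the synthetic covariate shift is an assumption for the example, and the shared-representation update (the key gradient step of CST's reverse step) is only indicated in a comment.

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    # Tsallis q-entropy of a probability vector; as q -> 1 it recovers
    # the Shannon entropy. CST uses it as a confidence-friendly
    # regularizer on target predictions.
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-8:
        return float(-np.sum(p * np.log(np.clip(p, 1e-12, None))))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def fit_linear(X, y):
    # Least-squares linear classifier on +/-1 targets (toy classifier head).
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.lstsq(X1, 2.0 * y - 1.0, rcond=None)[0]

def predict(w, X):
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return (X1 @ w > 0).astype(float)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (200, 2))     # labeled source data
ys = (Xs[:, 0] > 0).astype(float)
Xt = Xs + np.array([0.3, 0.0])          # covariate-shifted target; labels unseen

# Forward step: the source-trained classifier produces target pseudo-labels.
w_src = fit_linear(Xs, ys)
pseudo = predict(w_src, Xt)

# Reverse step: train a target classifier on the pseudo-labels, then
# evaluate it on the *source* data. In full CST this source loss is
# back-propagated to update the shared feature extractor; here the
# "features" are just the raw inputs, so that step is omitted.
w_tgt = fit_linear(Xt, pseudo)
src_acc = float((predict(w_tgt, Xs) == ys).mean())
```

In the full method this cycle repeats until convergence, with the source performance of the pseudo-label-trained target head serving as the signal that the pseudo-labels generalize across domains.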
Pages: 14
Related Papers
50 records in total
  • [1] Understanding Self-Training for Gradual Domain Adaptation
    Kumar, Ananya
    Ma, Tengyu
    Liang, Percy
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [2] Adversarial Domain Adaptation Enhanced via Self-training
    Altinel, Fazil
    Akkaya, Ibrahim Batuhan
    [J]. 29TH IEEE CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS (SIU 2021), 2021,
  • [3] Unsupervised Domain Adaptation with Multiple Domain Discriminators and Adaptive Self-Training
    Spadotto, Teo
    Toldo, Marco
    Michieli, Umberto
    Zanuttigh, Pietro
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 2845 - 2852
  • [4] Improve conditional adversarial domain adaptation using self-training
    Wang, Zi
    Sun, Xiaoliang
    Su, Ang
    Wang, Gang
    Li, Yang
    Yu, Qifeng
    [J]. IET IMAGE PROCESSING, 2021, 15 (10) : 2169 - 2178
  • [5] Energy-constrained Self-training for Unsupervised Domain Adaptation
    Liu, Xiaofeng
    Hu, Bo
    Liu, Xiongchang
    Lu, Jun
    You, Jane
    Kong, Lingsheng
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 7515 - 7520
  • [6] DUAL-CONSISTENCY SELF-TRAINING FOR UNSUPERVISED DOMAIN ADAPTATION
    Wang, Jie
    Zhong, Chaoliang
    Feng, Cheng
    Sun, Jun
    Ide, Masaru
    Yokota, Yasuto
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1529 - 1533
  • [7] Self-training transformer for source-free domain adaptation
    Yang, Guanglei
    Zhong, Zhun
    Ding, Mingli
    Sebe, Nicu
    Ricci, Elisa
    [J]. APPLIED INTELLIGENCE, 2023, 53 (13) : 16560 - 16574
  • [9] Self-training Guided Adversarial Domain Adaptation For Thermal Imagery
    Akkaya, Ibrahim Batuhan
    Altinel, Fazil
    Halici, Ugur
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 4317 - 4326
  • [10] DaMSTF: Domain Adversarial Learning Enhanced Meta Self-Training for Domain Adaptation
    Lu, Menglong
    Huang, Zhen
    Zhao, Yunxiang
    Tian, Zhiliang
    Liu, Yang
    Li, Dongsheng
    [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 1650 - 1668