DCL: Dipolar Confidence Learning for Source-Free Unsupervised Domain Adaptation

Cited: 0
Authors
Tian, Qing [1 ,2 ,3 ]
Sun, Heyang [4 ,5 ]
Peng, Shun [6 ]
Zheng, Yuhui [7 ]
Wan, Jun [8 ]
Lei, Zhen [6 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Software, Wuxi 214000, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Wuxi Inst Technol, Wuxi 214000, Peoples R China
[3] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
[4] Nanjing Univ Informat Sci & Technol, Sch Software, Nanjing 210044, Peoples R China
[5] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 210016, Peoples R China
[6] Nanjing Univ Informat Sci & Technol, Sch Software, Nanjing 210044, Peoples R China
[7] Qinghai Normal Univ, Coll Comp, Xining 810016, Peoples R China
[8] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adaptation models; Data models; Task analysis; Predictive models; Generators; Feature extraction; Training; Source-free unsupervised domain adaptation (SFUDA); dipolar confidence learning (DCL); fuzzy mixup; rotation-based self-supervised learning;
DOI
10.1109/TCSVT.2023.3332353
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Source-free unsupervised domain adaptation (SFUDA) aims to make predictions on the target domain by leveraging knowledge from a well-trained source model. Because source data are unavailable in the SFUDA setting, existing methods mainly build the target classifier by fine-tuning the source model with empirical adaptation losses. Although these methods have achieved promising results, nearly all of them suffer from a closed-fitting dilemma: lacking labeled source data, their models are dominated by easy-to-distinguish instances rather than hard-to-distinguish ones. To address this issue, we propose Dipolar Confidence Learning (DCL) for SFUDA. Specifically, we conduct positive confidence learning on samples with standard outputs to avoid overfitting the model to them. In contrast, we perform negative confidence learning on samples with abnormal outputs by optimizing a complementary label, which forces the network to pay more attention to these confusing samples. Furthermore, to achieve more generalized domain alignment, both confidence-based fuzzy mixup and rotation-based self-supervised learning are constructed to boost the representation ability of the target model. Finally, extensive experiments demonstrate the effectiveness and performance superiority of the proposed method.
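The dipolar split described in the abstract — positive confidence learning on confident predictions, negative (complementary-label) learning on confusing ones — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the confidence threshold `tau`, and the random choice of complementary label are assumptions introduced here for illustration.

```python
import numpy as np

def dipolar_confidence_loss(probs, tau=0.75, rng=None):
    """Sketch of a dipolar confidence objective over softmax outputs.

    probs: (N, C) array of per-sample class probabilities.
    tau:   confidence threshold splitting confident vs. confusing samples
           (a hypothetical hyperparameter, not from the paper).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    losses = []
    for p in probs:
        if p.max() >= tau:
            # Positive confidence learning: cross-entropy against the
            # model's own pseudo-label argmax(p), reinforcing a
            # confident prediction without a ground-truth label.
            losses.append(-np.log(p[p.argmax()] + 1e-12))
        else:
            # Negative confidence learning: pick a complementary label
            # (any class other than the predicted one) and push its
            # probability down, i.e. maximize log(1 - p_k).
            choices = [c for c in range(len(p)) if c != p.argmax()]
            k = rng.choice(choices)
            losses.append(-np.log(1.0 - p[k] + 1e-12))
    return float(np.mean(losses))

probs = np.array([[0.90, 0.05, 0.05],   # confident -> positive learning
                  [0.40, 0.35, 0.25]])  # confusing -> negative learning
loss = dipolar_confidence_loss(probs)
```

The thresholded split is the key design point: confident samples get a standard self-training signal, while confusing samples contribute only the weaker "not this class" signal, so the model is not forced to commit to unreliable pseudo-labels.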
Pages: 4342-4353
Page count: 12