E2: Entropy Discrimination and Energy Optimization for Source-free Universal Domain Adaptation

Cited by: 2
Authors
Shen, Meng [1]
Ma, Andy J. [1,3,4]
Yuen, Pong C. [2]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[2] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
[3] Guangdong Prov Key Lab Informat Secur Technol, Guangzhou, Peoples R China
[4] Minist Educ, Key Lab Machine Intelligence & Adv Comp, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Universal Domain Adaptation; Source-free Domain Adaptation; Confidence-guided Entropy; Energy
DOI
10.1109/ICME55011.2023.00460
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Universal domain adaptation (UniDA) transfers knowledge under both distribution and category shifts. Most UniDA methods require access to source-domain data during model adaptation, which may violate privacy policies and incur inefficient source-data transfer. To address this issue, we propose a novel source-free UniDA method coupling confidence-guided entropy discrimination with likelihood-induced energy optimization. Entropy-based separation of target samples into known and unknown classes is too conservative for known-class prediction. We therefore derive a confidence-guided entropy by scaling the normalized prediction score with the known-class confidence, so that more known-class samples are correctly predicted. Because the marginal distribution is difficult to estimate without source-domain data, we constrain the target-domain marginal distribution by maximizing the known-class likelihood and minimizing the unknown-class likelihood, which is equivalent to free-energy optimization. Theoretically, the overall optimization amounts to decreasing the internal energy of the known classes and increasing that of the unknown classes, in the physics sense. Extensive experiments demonstrate the superiority of the proposed method.
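A minimal, hypothetical sketch of the two ingredients described in the abstract, assuming PyTorch, taking the known-class confidence to be the maximum softmax probability, and using the standard logsumexp free energy of an energy-based classifier; the threshold and the multiplicative scaling below are illustrative assumptions, not the paper's exact formulation:

import math
import torch
import torch.nn.functional as F

def confidence_guided_entropy(logits):
    # Normalized prediction entropy scaled by known-class confidence.
    # Assumption: confidence = max softmax probability, scaling is
    # multiplicative; lower scores suggest a known-class sample.
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    norm_entropy = entropy / math.log(logits.size(1))   # in [0, 1]
    confidence = probs.max(dim=1).values                # known-class confidence
    return norm_entropy * (1.0 - confidence)

def free_energy(logits, temperature=1.0):
    # Free energy induced by classifier logits: E(x) = -T * logsumexp(f(x)/T).
    # Lower energy corresponds to higher (unnormalized) known-class likelihood,
    # so minimizing it for known samples maximizes their likelihood.
    return -temperature * torch.logsumexp(logits / temperature, dim=1)

# Illustrative usage; the 0.5 threshold is hypothetical, not from the paper.
logits = torch.randn(8, 10)            # a batch of target-domain logits
scores = confidence_guided_entropy(logits)
known = scores < 0.5                   # pseudo-known / pseudo-unknown split
energy = free_energy(logits)
if known.any() and (~known).any():
    # Decrease energy (raise likelihood) for pseudo-known samples and
    # increase it for pseudo-unknown ones.
    loss = energy[known].mean() - energy[~known].mean()

Under this reading, pushing the free energy of pseudo-known samples down raises their likelihood under the classifier-induced density while pushing it up for pseudo-unknown samples lowers theirs, matching the abstract's likelihood-based view of energy optimization.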
Pages: 2705-2710
Page count: 6
Related Papers
50 items in total
  • [1] Universal Source-Free Domain Adaptation
    Kundu, Jogendra Nath
    Venkat, Naveen
    Rahul, M. V.
    Babu, R. Venkatesh
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 4543 - 4552
  • [2] LEAD: Learning Decomposition for Source-free Universal Domain Adaptation
    Qu, Sanqing
    Zou, Tianpei
    He, Lianghua
    Roehrbein, Florian
    Knoll, Alois
    Chen, Guang
    Jiang, Changjun
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 23334 - 23343
  • [3] Collaborative Learning of Diverse Experts for Source-free Universal Domain Adaptation
    Shen, Meng
    Lu, Yanzuo
    Hu, Yanxu
    Ma, Andy J.
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 2054 - 2065
  • [4] USDAP: universal source-free domain adaptation based on prompt learning
    Shao, Xun
    Shao, Mingwen
    Chen, Sijie
    Liu, Yuanyuan
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (05)
  • [5] Generalized Source-free Domain Adaptation
    Yang, Shiqi
    Wang, Yaxing
    van de Weijer, Joost
    Herranz, Luis
    Jui, Shangling
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 8958 - 8967
  • [6] Imbalanced Source-free Domain Adaptation
    Li, Xinhao
    Li, Jingjing
    Zhu, Lei
    Wang, Guoqing
    Huang, Zi
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 3330 - 3339
  • [7] Source bias reduction for source-free domain adaptation
    Tian, Liang
    Ye, Mao
    Zhou, Lihua
    Wang, Zhenbin
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (SUPPL 1) : 883 - 893
  • [8] Source-free domain adaptation with unrestricted source hypothesis
    He, Jiujun
    Wu, Liang
    Tao, Chaofan
    Lv, Fengmao
    PATTERN RECOGNITION, 2024, 149
  • [9] Adversarial Source Generation for Source-Free Domain Adaptation
    Cui, Chaoran
    Meng, Fan'an
    Zhang, Chunyun
    Liu, Ziyi
    Zhu, Lei
    Gong, Shuai
    Lin, Xue
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (06) : 4887 - 4898