Enhancing Information Maximization With Distance-Aware Contrastive Learning for Source-Free Cross-Domain Few-Shot Learning

Cited: 3
|
Authors
Xu, Huali [1 ,2 ]
Liu, Li [3 ]
Zhi, Shuaifeng [3 ]
Fu, Shaojing [4 ]
Su, Zhuo [2 ]
Cheng, Ming-Ming [5 ]
Liu, Yongxiang [3 ]
Affiliations
[1] Nankai Univ, Coll Comp Sci, Tianjin 300071, Peoples R China
[2] Univ Oulu, Ctr Machine Vis & Signal Anal CMVS, Oulu 90570, Finland
[3] Natl Univ Def Technol, Coll Elect Sci & Technol, Changsha 410073, Peoples R China
[4] Natl Univ Def Technol, Coll Comp, Changsha 410073, Peoples R China
[5] Nankai Univ, Coll Comp Sci, TKLNDST, Tianjin 300071, Peoples R China
Keywords
Cross-domain few-shot learning; source-free; information maximization; distance-aware contrastive learning; transductive learning;
DOI
10.1109/TIP.2024.3374222
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing Cross-Domain Few-Shot Learning (CDFSL) methods require access to source-domain data to train a model in the pre-training phase. However, growing concerns about data privacy and the desire to reduce data transmission and training costs make it necessary to develop a CDFSL solution that does not access source data. To this end, this paper explores the Source-Free CDFSL (SF-CDFSL) problem, which addresses CDFSL using existing pretrained models instead of training a model on source data, thereby avoiding access to that data. The absence of source data raises two key challenges: tackling CDFSL effectively with only a few labeled target samples, and the impossibility of handling domain disparities by aligning the source and target distributions. This paper proposes an enhanced Information Maximization with Distance-Aware Contrastive Learning (IM-DCL) method to address these challenges. First, we introduce a transductive mechanism for learning the query set. Second, information maximization (IM) is explored to map target samples to predictions with both individual certainty and global diversity, helping the source model better fit the target data distribution. However, IM fails to learn the decision boundary of the target task. This motivates a novel approach called Distance-Aware Contrastive Learning (DCL), in which the entire feature set serves as both the positive and the negative set, akin to Schrödinger's concept of a dual state. Instead of rigidly separating positive and negative sets, we employ a weighted distance calculation among features to establish a soft partition of the entire feature set into positives and negatives. We explore three types of negative weights to enhance CDFSL performance. Furthermore, we address the shortcomings of IM by imposing contrastive constraints between object features and their corresponding positive and negative sets.
Evaluations on the four datasets of the BSCD-FSL benchmark show that the proposed IM-DCL, without accessing the source domain, outperforms existing methods, especially on distant-domain tasks. An ablation study and performance analysis further confirm the ability of IM-DCL to handle SF-CDFSL. The code will be made public at https://github.com/xuhuali-mxj/IM-DCL.
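The information-maximization objective described in the abstract (individual certainty plus global diversity) can be sketched as follows. This is a minimal illustration assuming a standard entropy-based IM formulation, not the authors' exact implementation; the function and variable names are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def im_loss(logits, eps=1e-8):
    """Information-maximization objective over a batch of target logits.

    Minimizing this loss encourages:
      - individual certainty: low entropy of each sample's prediction;
      - global diversity: high entropy of the batch-mean prediction,
        which discourages collapsing all samples onto one class.
    """
    p = softmax(logits)
    # Mean per-sample entropy (to be minimized).
    certainty = -(p * np.log(p + eps)).sum(axis=1).mean()
    # Entropy of the average prediction (to be maximized, hence subtracted).
    mean_p = p.mean(axis=0)
    diversity = -(mean_p * np.log(mean_p + eps)).sum()
    return certainty - diversity
```

Under this formulation, a batch of confident predictions spread evenly across classes attains a lower loss than a batch of uniform (maximally uncertain) predictions, which is exactly the behavior the abstract attributes to IM.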
Pages: 2058-2073
Number of pages: 16
Related Papers
50 records in total
  • [1] Visual Domain Bridge: A source-free domain adaptation for cross-domain few-shot learning
    Yazdanpanah, Moslem
    Moradi, Parham
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 2867 - 2876
  • [2] Ranking Distance Calibration for Cross-Domain Few-Shot Learning
    Li, Pan
    Gong, Shaogang
    Wang, Chengjie
    Fu, Yanwei
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 9089 - 9098
  • [3] Free-Lunch for Cross-Domain Few-Shot Learning: Style-Aware Episodic Training with Robust Contrastive Learning
    Zhang, Ji
    Song, Jingkuan
    Gao, Lianli
    Shen, Hengtao
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 2586 - 2594
  • [4] Cross-Domain Few-Shot Contrastive Learning for Hyperspectral Images Classification
    Zhang, Suhua
    Chen, Zhikui
    Wang, Dan
    Wang, Z. Jane
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [5] Task-aware Adaptive Learning for Cross-domain Few-shot Learning
    Guo, Yurong
    Du, Ruoyi
    Dong, Yuan
    Hospedales, Timothy
    Song, Yi-Zhe
    Ma, Zhanyu
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 1590 - 1599
  • [6] HybridPrompt: Domain-Aware Prompting for Cross-Domain Few-Shot Learning
    Wu, Jiamin
    Zhang, Tianzhu
    Zhang, Yongdong
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (12) : 5681 - 5697
  • [8] Knowledge transduction for cross-domain few-shot learning
    Li, Pengfang
    Liu, Fang
    Jiao, Licheng
    Li, Shuo
    Li, Lingling
    Liu, Xu
    Huang, Xinyan
    PATTERN RECOGNITION, 2023, 141
  • [9] Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
    Oh, Jaehoon
    Kim, Sungnyun
    Ho, Namgyu
    Kim, Jin-Hwa
    Song, Hwanjun
    Yun, Se-Young
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [10] CoConGAN: Cooperative contrastive learning for few-shot cross-domain heterogeneous face translation
    Zhang, Yinghui
    Hu, Wansong
    Sun, Bo
    He, Jun
    Yu, Lejun
    Neural Computing and Applications, 2023, 35 : 15019 - 15032