Enhancing Information Maximization With Distance-Aware Contrastive Learning for Source-Free Cross-Domain Few-Shot Learning

Cited by: 3
Authors
Xu, Huali [1 ,2 ]
Liu, Li [3 ]
Zhi, Shuaifeng [3 ]
Fu, Shaojing [4 ]
Su, Zhuo [2 ]
Cheng, Ming-Ming [5 ]
Liu, Yongxiang [3 ]
Affiliations
[1] Nankai Univ, Coll Comp Sci, Tianjin 300071, Peoples R China
[2] Univ Oulu, Ctr Machine Vis & Signal Anal CMVS, Oulu 90570, Finland
[3] Natl Univ Def Technol, Coll Elect Sci & Technol, Changsha 410073, Peoples R China
[4] Natl Univ Def Technol, Coll Comp, Changsha 410073, Peoples R China
[5] Nankai Univ, Coll Comp Sci, TKLNDST, Tianjin 300071, Peoples R China
Keywords
Cross-domain few-shot learning; source-free; information maximization; distance-aware contrastive learning; transductive learning;
DOI
10.1109/TIP.2024.3374222
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing Cross-Domain Few-Shot Learning (CDFSL) methods require access to source-domain data to train a model in the pre-training phase. However, due to growing concerns about data privacy and the desire to reduce data transmission and training costs, it is necessary to develop a CDFSL solution that does not access source data. This paper therefore explores a Source-Free CDFSL (SF-CDFSL) problem, in which CDFSL is addressed through existing pretrained models rather than by training a model on source data, thereby avoiding access to the source domain. The lack of source data raises two key challenges: tackling CDFSL effectively with only limited labeled target samples, and the inability to address domain disparities by aligning the source and target distributions. This paper proposes an Enhanced Information Maximization with Distance-Aware Contrastive Learning (IM-DCL) method to address these challenges. First, we introduce a transductive mechanism for learning the query set. Second, information maximization (IM) is explored to map target samples to predictions with both individual certainty and global diversity, helping the source model better fit the target data distribution. However, IM fails to learn the decision boundary of the target task. This motivates us to introduce a novel approach called Distance-Aware Contrastive Learning (DCL), in which we treat the entire feature set as both positive and negative sets, akin to Schrödinger's concept of a dual state. Instead of a rigid separation between positive and negative sets, we employ a weighted distance calculation among features to establish a soft partition of the feature set into positives and negatives. We explore three types of negative weights to enhance CDFSL performance. Furthermore, we address the limitations of IM by incorporating contrastive constraints between object features and their corresponding positive and negative sets.
Evaluations on the four datasets of the BSCD-FSL benchmark indicate that the proposed IM-DCL, without accessing the source domain, outperforms existing methods, especially on distant-domain tasks. Additionally, an ablation study and performance analysis confirm the ability of IM-DCL to handle SF-CDFSL. The code will be made public at https://github.com/xuhuali-mxj/IM-DCL.
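The two objectives described in the abstract — information maximization (individual certainty plus global diversity) and DCL's distance-weighted soft positive/negative split — can be sketched roughly as follows. This is a minimal NumPy illustration for intuition only; the function names, the temperature `tau`, and the softmax-over-negative-distance weighting are illustrative assumptions, not the authors' implementation (the official code is at the repository linked above).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def information_maximization_loss(logits):
    """IM objective: encourage confident (low-entropy) per-sample
    predictions while keeping the batch-level mean prediction diverse
    (high entropy)."""
    p = softmax(logits)                                      # (N, C)
    certainty = -(p * np.log(p + 1e-8)).sum(axis=1).mean()   # to minimize
    p_bar = p.mean(axis=0)
    diversity = -(p_bar * np.log(p_bar + 1e-8)).sum()        # to maximize
    return certainty - diversity

def distance_aware_contrastive_loss(features, tau=0.1):
    """Soft positive/negative split: every other feature contributes as a
    positive with a distance-based weight (closer -> larger weight), and
    implicitly as a negative with the complementary weight, instead of a
    hard positive/negative partition."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    dist = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)  # (N, N)
    np.fill_diagonal(dist, np.inf)            # exclude self-pairs
    w = softmax(-dist / tau, axis=1)          # soft positive weights per row
    sim = (f @ f.T) / tau                     # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    m = sim.max(axis=1, keepdims=True)        # log-softmax over each row
    log_p = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    np.fill_diagonal(log_p, 0.0)              # avoid 0 * (-inf) on diagonal
    return -(w * log_p).sum(axis=1).mean()    # weighted InfoNCE-style loss
```

In this sketch the positive weight of each pair decays smoothly with feature distance, which mirrors the "soft classification of the positive and negative sets" the abstract describes; a hard split would correspond to one-hot weights.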
Pages: 2058-2073
Page count: 16