A Divide-and-Conquer Strategy for Cross-Domain Few-Shot Learning

Cited by: 0
Authors
Wang, Bingxin [1 ]
Yu, Dehong [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Mech Engn, 28 Xianning West Rd, Xian 710049, Peoples R China
Source
ELECTRONICS | 2025, Vol. 14, Issue 03
Funding
National Natural Science Foundation of China;
Keywords
cross-domain few-shot learning; domain metric; divide-and-conquer strategy; whitened PCA;
DOI
10.3390/electronics14030418
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Cross-Domain Few-Shot Learning (CD-FSL) aims to equip machines with the ability to rapidly acquire new concepts across domains from an extremely limited number of training samples in the target domain. This ability hinges on the model's capacity to extract and transfer generalizable knowledge from a source training set. Studies have indicated that the similarity between the source and target data distributions, together with the difficulty of the target tasks, determines the model's classification performance. However, the current lack of quantitative metrics hampers researchers' ability to devise appropriate learning strategies, leading to a fragmented understanding of the field. To address this issue, we propose quantitative metrics of domain distance and target difficulty, which allow us to categorize target tasks into three regions on a two-dimensional plane: near-domain tasks, far-domain low-difficulty tasks, and far-domain high-difficulty tasks. For datasets in different regions, we propose a Divide-and-Conquer Strategy (DCS) to tackle few-shot classification across diverse target datasets. Empirical results on 15 target datasets demonstrate the compatibility and effectiveness of our approach, improving model performance. We conclude that the proposed metrics are reliable and the Divide-and-Conquer Strategy is effective, offering valuable insights and a reference for future research on CD-FSL.
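As a rough illustration of the triage the abstract describes, the Python sketch below maps a target task onto one of the three regions of the (domain distance, target difficulty) plane. The mean-feature distance proxy, the scalar difficulty score, and both threshold values are illustrative assumptions for this sketch only; the paper defines its own quantitative metrics.

    import numpy as np

    def domain_distance(source_feats, target_feats):
        # Illustrative proxy: Euclidean distance between the mean feature
        # vectors of the source and target sets (an assumption, not the
        # paper's metric).
        return float(np.linalg.norm(source_feats.mean(axis=0)
                                    - target_feats.mean(axis=0)))

    def categorize_task(dist, difficulty, dist_thresh, diff_thresh):
        # Partition the 2-D plane into the three regions named in the
        # abstract; the threshold values are placeholders.
        if dist <= dist_thresh:
            return "near-domain"
        if difficulty <= diff_thresh:
            return "far-domain, low-difficulty"
        return "far-domain, high-difficulty"

    # Usage with synthetic features: a task far from the source domain
    # but with a low difficulty score.
    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, size=(100, 64))  # stand-in source features
    tgt = rng.normal(1.0, 1.0, size=(100, 64))  # stand-in target features
    d = domain_distance(src, tgt)               # ~8 for these synthetic sets
    print(categorize_task(d, difficulty=0.3, dist_thresh=4.0, diff_thresh=0.5))
    # -> far-domain, low-difficulty

Under the Divide-and-Conquer Strategy, each region would then be handled by its own learning recipe; the sketch above covers only the categorization step.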
Pages: 24
Related Papers
50 records in total
  • [21] Multimodal Cross-Domain Few-Shot Learning for Egocentric Action Recognition
    Hatano, Masashi
    Hachiuma, Ryo
    Fujii, Ryo
    Saito, Hideo
    COMPUTER VISION - ECCV 2024, PT XXXIII, 2025, 15091 : 182 - 199
  • [22] Cross-domain few-shot learning based on feature adaptive distillation
    Dingwei Zhang
    Hui Yan
    Yadang Chen
    Dichao Li
    Chuanyan Hao
    Neural Computing and Applications, 2024, 36 : 4451 - 4465
  • [23] CDCNet: Cross-domain few-shot learning with adaptive representation enhancement
    Li, Xueying
    He, Zihang
    Zhang, Lingyan
    Guo, Shaojun
    Hu, Bin
    Guo, Kehua
    PATTERN RECOGNITION, 2025, 162
  • [24] Learning general features to bridge the cross-domain gaps in few-shot
    Li, Xiang
    Luo, Hui
    Zhou, Gaofan
    Peng, Xiaoming
    Wang, Zhixing
    Zhang, Jianlin
    Liu, Dongxu
    Li, Meihui
    Liu, Yunfeng
    KNOWLEDGE-BASED SYSTEMS, 2024, 299
  • [25] Cross-domain Few-shot Learning with Task-specific Adapters
    Li, Wei-Hong
    Liu, Xialei
    Bilen, Hakan
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 7151 - 7160
  • [26] Task context transformer and GCN for few-shot learning of cross-domain
    Li, Pengfang
    Liu, Fang
    Jiao, Licheng
    Li, Lingling
    Chen, Puhua
    Li, Shuo
    NEUROCOMPUTING, 2023, 548
  • [27] CDFSL-V: Cross-Domain Few-Shot Learning for Videos
    Samarasinghe, Sarinda
    Rizve, Mamshad Nayeem
    Kardan, Navid
    Shah, Mubarak
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 11609 - 11618
  • [28] Target Oriented Dynamic Adaption for Cross-Domain Few-Shot Learning
    Chang, Xinyi
    Du, Chunyu
    Song, Xinjing
    Liu, Weifeng
    Wang, Yanjiang
    NEURAL PROCESSING LETTERS, 2024, 56 (03)
  • [29] Task-aware Adaptive Learning for Cross-domain Few-shot Learning
    Guo, Yurong
    Du, Ruoyi
    Dong, Yuan
    Hospedales, Timothy
    Song, Yi-Zhe
    Ma, Zhanyu
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 1590 - 1599