Prototype-Augmented Contrastive Learning for Few-Shot Unsupervised Domain Adaptation

Cited by: 0
Authors
Gong, Lu [1]
Zhang, Wen [1]
Li, Mingkang [1]
Zhang, Jiali [1]
Zhang, Zili [1]
Affiliations
[1] Southwest Univ, Coll Comp & Informat Sci, Chongqing 400715, Peoples R China
Keywords
Unsupervised domain adaptation; Self-supervised learning; Few-shot learning; Prototype learning; Contrastive learning;
DOI
10.1007/978-3-031-40292-0_17
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Unsupervised domain adaptation aims to learn a classification model from a richly labeled source domain and apply it to an entirely unlabeled target domain. However, collecting enough labeled source samples is difficult in some scenarios, which substantially reduces the effectiveness of previous approaches. This work therefore considers a more challenging and practical problem, few-shot unsupervised domain adaptation, in which a classifier trained with only a few source labels must generalize well to the target domain. Prototype-based self-supervised learning has delivered strong performance improvements on this problem, but the quality of the prototypes can still be improved. To address this, a novel Prototype-Augmented Contrastive Learning method is proposed. A new computation strategy rectifies the source prototypes, which are then used to improve the target prototypes. To better capture semantic information and align features, both in-domain prototype contrastive learning and cross-domain prototype contrastive learning are performed. Extensive experiments on three widely used benchmarks, Office, OfficeHome, and DomainNet, show accuracy improvements of over 3%, 1%, and 0.5%, respectively, demonstrating the effectiveness of the proposed method.
Pages: 197-210
Number of pages: 14
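To make the prototype-contrastive idea described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: prototypes are taken as class-mean embeddings, and an InfoNCE-style loss contrasts each feature against all class prototypes. The helper names (class_prototypes, prototype_contrastive_loss), the temperature value, and the nearest-prototype pseudo-labeling used for the cross-domain term are illustrative assumptions; the paper's prototype rectification strategy is not reproduced here.

```python
# Minimal sketch (assumptions, not the paper's exact method): prototypes are
# class-mean embeddings, and an InfoNCE-style loss contrasts each feature
# against all class prototypes.
import torch
import torch.nn.functional as F


def class_prototypes(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Mean embedding per class; rows for classes with no samples stay zero."""
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    counts = torch.zeros(num_classes, device=features.device)
    protos.index_add_(0, labels, features)
    counts.index_add_(0, labels, torch.ones_like(labels, dtype=torch.float))
    protos = protos / counts.clamp(min=1).unsqueeze(1)
    return F.normalize(protos, dim=1)


def prototype_contrastive_loss(features: torch.Tensor, labels: torch.Tensor,
                               prototypes: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Cross-entropy over feature-to-prototype similarities (InfoNCE with prototypes as keys)."""
    logits = F.normalize(features, dim=1) @ prototypes.t() / temperature
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    num_classes, dim = 5, 64
    src_feats, src_labels = torch.randn(20, dim), torch.randint(0, num_classes, (20,))
    tgt_feats = torch.randn(30, dim)

    # In-domain term: source features contrasted against source prototypes.
    src_protos = class_prototypes(F.normalize(src_feats, dim=1), src_labels, num_classes)
    in_domain = prototype_contrastive_loss(src_feats, src_labels, src_protos)

    # Cross-domain term: target features contrasted against source prototypes,
    # with pseudo-labels from the nearest prototype (one simple alignment choice).
    pseudo = (F.normalize(tgt_feats, dim=1) @ src_protos.t()).argmax(dim=1)
    cross_domain = prototype_contrastive_loss(tgt_feats, pseudo, src_protos)

    print(f"in-domain loss: {in_domain:.3f}, cross-domain loss: {cross_domain:.3f}")
```

In practice the two losses would be weighted and added to a standard supervised loss on the few labeled source samples; the nearest-prototype pseudo-labeling above is only one simple stand-in for how target prototypes could be formed and refined.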