Self-Attention Message Passing for Contrastive Few-Shot Learning

Cited by: 0
Authors
Shirekar, Ojas Kishorkumar [1 ,2 ]
Singh, Anuj [1 ,2 ]
Jamali-Rad, Hadi [1 ,2 ]
Affiliations
[1] Delft Univ Technol, Delft, Netherlands
[2] Shell Global Solut Int BV, Amsterdam, Netherlands
DOI: 10.1109/WACV56688.2023.00539
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Humans have a unique ability to learn new representations from just a handful of examples with little to no supervision. Deep learning models, however, require an abundance of data and supervision to perform at a satisfactory level. Unsupervised few-shot learning (U-FSL) is the pursuit of bridging this gap between machines and humans. Inspired by the capacity of graph neural networks (GNNs) to discover complex inter-sample relationships, we propose a novel self-attention based message passing contrastive learning approach (coined SAMP-CLR) for U-FSL pre-training. We also propose an optimal transport (OT) based fine-tuning strategy (we call OpT-Tune) to efficiently induce task awareness into our novel end-to-end unsupervised few-shot classification framework (SAMPTransfer). Our extensive experimental results corroborate the efficacy of SAMPTransfer in a variety of downstream few-shot classification scenarios, setting a new state of the art for U-FSL on both the miniImageNet and tieredImageNet benchmarks, with improvements of up to 7% and 5%, respectively. Our further investigations also confirm that SAMPTransfer remains on par with some supervised baselines on miniImageNet and outperforms all existing U-FSL baselines in a challenging cross-domain scenario. Our code can be found in our GitHub repository: https://github.com/ojss/SAMPTransfer/.
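
The paper's actual architecture is in the linked repository; purely as an illustration of the self-attention message-passing idea named in the abstract, the following minimal Python/PyTorch sketch (not the authors' code; class name, dimensions, and layer choices are assumptions) lets every sample embedding in an episode exchange information with every other sample, with attention weights playing the role of a learned, fully connected graph adjacency.

import torch
import torch.nn as nn

class SelfAttentionMessagePassing(nn.Module):
    """One round of attention-based message passing over sample embeddings."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Each sample in the episode is treated as a graph node; multi-head
        # self-attention aggregates messages from all other nodes.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, n_samples, dim) embeddings from a backbone encoder.
        msg, _ = self.attn(z, z, z)         # aggregate messages from all samples
        z = self.norm1(z + msg)             # residual node update
        return self.norm2(z + self.mlp(z))  # per-node feature transform

# Example: refine 16 episode embeddings of dimension 64.
layer = SelfAttentionMessagePassing(dim=64)
refined = layer(torch.randn(1, 16, 64))     # -> shape (1, 16, 64)

A contrastive loss (e.g., between two augmented views) would then be applied on the refined embeddings rather than the raw backbone features, which is what distinguishes this style of pre-training from per-sample contrastive learning.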
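The OT-based fine-tuning idea can likewise be illustrated with entropic optimal transport computed via Sinkhorn iterations. The sketch below is an assumption-laden illustration in the same spirit as OpT-Tune, not its implementation: it aligns query embeddings with a task's support embeddings through a barycentric projection of the transport plan, thereby injecting task awareness at inference time. All function names and hyperparameters are hypothetical.

import torch

def sinkhorn(cost: torch.Tensor, eps: float = 0.05, n_iters: int = 50) -> torch.Tensor:
    """Entropic OT plan between uniform marginals via Sinkhorn iterations."""
    n, m = cost.shape
    K = torch.exp(-cost / eps)          # Gibbs kernel of the cost matrix
    u = torch.full((n,), 1.0 / n)       # uniform row marginal
    v = torch.full((m,), 1.0 / m)       # uniform column marginal
    a, b = torch.ones(n), torch.ones(m)
    for _ in range(n_iters):            # alternating scaling updates
        a = u / (K @ b)
        b = v / (K.T @ a)
    return a[:, None] * K * b[None, :]  # transport plan diag(a) K diag(b)

def align_queries(q: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    # q: (n_query, dim) query embeddings; s: (n_support, dim) support embeddings.
    cost = torch.cdist(q, s) ** 2       # squared Euclidean ground cost
    plan = sinkhorn(cost)
    # Barycentric projection: each query becomes a plan-weighted mix of
    # supports, pulling query features toward the current task's support set.
    return (plan / plan.sum(dim=1, keepdim=True)) @ s

The aligned queries can then be classified with a simple nearest-prototype rule; the appeal of this step is that it requires no gradient updates, only a handful of matrix operations per episode.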
Pages: 5415-5425
Page count: 11