Self-Attention Message Passing for Contrastive Few-Shot Learning

Cited by: 0
Authors
Shirekar, Ojas Kishorkumar [1 ,2 ]
Singh, Anuj [1 ,2 ]
Jamali-Rad, Hadi [1 ,2 ]
Affiliations
[1] Delft Univ Technol, Delft, Netherlands
[2] Shell Global Solut Int BV, Amsterdam, Netherlands
DOI
10.1109/WACV56688.2023.00539
CLC Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Humans have a unique ability to learn new representations from just a handful of examples with little to no supervision. Deep learning models, however, require an abundance of data and supervision to perform at a satisfactory level. Unsupervised few-shot learning (U-FSL) is the pursuit of bridging this gap between machines and humans. Inspired by the capacity of graph neural networks (GNNs) to discover complex inter-sample relationships, we propose a novel self-attention-based message-passing contrastive learning approach (coined SAMP-CLR) for U-FSL pre-training. We also propose an optimal transport (OT) based fine-tuning strategy (called OpT-Tune) to efficiently induce task awareness into our novel end-to-end unsupervised few-shot classification framework (SAMPTransfer). Our extensive experimental results corroborate the efficacy of SAMPTransfer in a variety of downstream few-shot classification scenarios, setting a new state-of-the-art for U-FSL on both miniImageNet and tieredImageNet benchmarks, offering up to 7%+ and 5%+ improvements, respectively. Our further investigations also confirm that SAMPTransfer remains on par with some supervised baselines on miniImageNet and outperforms all existing U-FSL baselines in a challenging cross-domain scenario. Our code can be found in our GitHub repository: https://github.com/ojss/SAMPTransfer/.
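The abstract names two components: attention-weighted message passing over sample embeddings (SAMP) and an optimal-transport step that aligns query features with support prototypes (OpT-Tune). The following is a minimal, hypothetical PyTorch sketch of these two ideas, not the authors' implementation; the class names, hyperparameters, and the Sinkhorn-based transport routine are illustrative assumptions (see the linked repository for the actual code).

# Hypothetical sketch of self-attention message passing and OT-based alignment.
import torch
import torch.nn as nn

class SelfAttentionMessagePassing(nn.Module):
    """One round of attention-weighted message passing over sample embeddings."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_samples, dim); each sample attends to all others,
        # so messages between samples are weighted by self-attention scores.
        msg, _ = self.attn(x, x, x)
        x = self.norm1(x + msg)            # aggregate messages (residual + norm)
        return self.norm2(x + self.mlp(x)) # per-node update

def sinkhorn_transport(cost: torch.Tensor, n_iters: int = 20, eps: float = 0.1) -> torch.Tensor:
    """Entropy-regularized OT plan between queries (rows) and prototypes (cols)."""
    K = torch.exp(-cost / eps)                              # Gibbs kernel
    r = torch.full((cost.size(0),), 1.0 / cost.size(0))     # uniform row marginal
    c = torch.full((cost.size(1),), 1.0 / cost.size(1))     # uniform column marginal
    u, v = r.clone(), c.clone()
    for _ in range(n_iters):                                # Sinkhorn-Knopp iterations
        u = r / (K @ v)
        v = c / (K.t() @ u)
    return torch.diag(u) @ K @ torch.diag(v)                # transport plan

# Toy usage: refine a 5-way 1-shot episode (5 support + 15 query embeddings),
# then softly assign queries to prototypes via the transport plan.
feats = torch.randn(1, 20, 64)
refined = SelfAttentionMessagePassing(64)(feats)[0]
support, query = refined[:5], refined[5:]
cost = torch.cdist(query, support)          # pairwise distances as OT cost
plan = sinkhorn_transport(cost)             # soft query-to-prototype assignment
pred = plan.argmax(dim=1)                   # hard labels from the plan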
Pages: 5415-5425
Page count: 11