Reinforced Self-Supervised Training for Few-Shot Learning

Cited by: 1
|
Authors
Yan, Zhichao [1 ]
An, Yuexuan [2 ]
Xue, Hui [1 ]
Affiliations
[1] Southeast Univ, Sch Comp Sci & Engn, Nanjing 211189, Peoples R China
[2] Southeast Univ, Key Lab New Generat Artificial Intelligence Techno, Minist Educ, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Feature extraction; Task analysis; Self-supervised learning; Adaptation models; Supervised learning; Reinforcement learning; Few-shot learning; reinforcement learning; self-supervised learning; NETWORK;
DOI
10.1109/LSP.2024.3370488
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification
0808; 0809;
Abstract
Few-shot learning is an open problem of learning a new concept from limited labeled data with little supervision. As an alternative source of knowledge for few-shot learning, self-supervised learning can extract supervisory signals directly from unlabeled data. However, existing self-supervised few-shot methods, which directly sum the objectives of the two tasks, face two fundamental bottlenecks: 1) representation bias: how to extract effective supervisory signals in self-supervision and eliminate the disturbance of undesirable shortcuts with limited examples; and 2) objective conflict: how to adaptively trade off self-supervision against supervision to achieve optimal model performance. To address these problems, in this paper we propose a novel approach named ReInforced SElf-supervised training (RISE) for few-shot learning. RISE leverages agent-relative supervision to eliminate the undesirable shortcut learning of self-supervised training. Meanwhile, it dynamically explores the balance between supervisory signals from self-supervised tasks and inherent supervision from few-shot tasks to avoid the trade-off dilemma. This new pattern for self-supervised training is therefore resilient in few-shot settings and enhances few-shot recognition performance. Extensive experiments on several public benchmark datasets verify the effectiveness of our approach.
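The abstract describes an agent that dynamically balances the self-supervised objective against the few-shot objective. The paper's actual RISE algorithm is not reproduced here; as a minimal sketch of the underlying idea, an epsilon-greedy bandit agent can pick the mixing weight lambda in L_total = L_fewshot + lambda * L_selfsup from the reward it observes. All names, the candidate weights, and the reward proxy below are illustrative assumptions, not the authors' method:

```python
import random

class LossWeightBandit:
    """Epsilon-greedy bandit that selects a mixing weight lambda for
    L_total = L_fewshot + lambda * L_selfsup (illustrative sketch only)."""

    def __init__(self, arms=(0.1, 0.5, 1.0), eps=0.1, seed=0):
        self.arms = arms                      # candidate lambda values
        self.eps = eps                        # exploration probability
        self.counts = [0] * len(arms)         # pulls per arm
        self.values = [0.0] * len(arms)       # running mean reward per arm
        self.rng = random.Random(seed)

    def select(self):
        """Explore a random arm with prob. eps, else exploit the best mean."""
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(self.arms))
        return max(range(len(self.arms)), key=lambda i: self.values[i])

    def update(self, arm, reward):
        """Incremental-mean update of the chosen arm's reward estimate."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def total_loss(l_fewshot, l_selfsup, lam):
    """Combined training objective with mixing weight lam."""
    return l_fewshot + lam * l_selfsup

# Toy episode loop: the reward is a stand-in for a validation-accuracy
# signal that happens to peak at lambda = 0.5 in this synthetic setup.
bandit = LossWeightBandit()
for _ in range(500):
    arm = bandit.select()
    lam = bandit.arms[arm]
    reward = 1.0 - abs(lam - 0.5) + bandit.rng.gauss(0, 0.05)
    bandit.update(arm, reward)

best = bandit.arms[max(range(len(bandit.arms)), key=lambda i: bandit.values[i])]
```

After enough episodes the agent concentrates on the weight with the highest observed reward, which is the bandit-style analogue of "dynamically exploring the balance" between the two supervisory signals.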
Pages: 731 - 735
Page count: 5
Related Papers
50 records
  • [21] A CONTRASTIVE SELF-SUPERVISED LEARNING SCHEME FOR BEAT TRACKING AMENABLE TO FEW-SHOT LEARNING
    Gagnere, Antonin
    Essid, Slim
    Peeters, Geoffroy
    [J]. arXiv
  • [22] Meta Self-Supervised Learning for Distribution Shifted Few-Shot Scene Classification
    Gong, Tengfei
    Zheng, Xiangtao
    Lu, Xiaoqiang
    [J]. IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [23] SSL-ProtoNet: Self-supervised Learning Prototypical Networks for few-shot learning
    Lim, Jit Yan
    Lim, Kian Ming
    Lee, Chin Poo
    Tan, Yong Xuan
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238
  • [24] MaskSplit: Self-supervised Meta-learning for Few-shot Semantic Segmentation
    Amac, Mustafa Sercan
    Sencan, Ahmet
    Baran, Orhun Bugra
    Ikizler-Cinbis, Nazli
    Cinbis, Ramazan Gokberk
    [J]. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 428 - 438
  • [25] Few-shot symbol classification via self-supervised learning and nearest neighbor
    Alfaro-Contreras, Maria
    Rios-Vila, Antonio
    Valero-Mas, Jose J.
    Calvo-Zaragoza, Jorge
    [J]. PATTERN RECOGNITION LETTERS, 2023, 167 : 1 - 8
  • [26] Self-supervised Prototype Conditional Few-Shot Object Detection
    Kobayashi, Daisuke
    [J]. IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT II, 2022, 13232 : 681 - 692
  • [27] Multi-task Self-supervised Few-Shot Detection
    Zhang, Guangyong
    Duan, Lijuan
    Wang, Wenjian
    Gong, Zhi
    Ma, Bian
    [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII, 2024, 14436 : 107 - 119
  • [28] Self-Supervised Approach for Few-shot Hand Gesture Recognition
    Kimura, Naoki
    [J]. ADJUNCT PROCEEDINGS OF THE 35TH ACM SYMPOSIUM ON USER INTERFACE SOFTWARE & TECHNOLOGY, UIST 2022, 2022
  • [29] SELF-SUPERVISED CLASS-COGNIZANT FEW-SHOT CLASSIFICATION
    Shirekar, Ojas Kishore
    Jamali-Rad, Hadi
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 976 - 980
  • [30] Self-Supervised Task Augmentation for Few-Shot Intent Detection
    Sun, Peng-Fei
    Ouyang, Ya-Wen
    Song, Ding-Jie
    Dai, Xin-Yu
    [J]. Journal of Computer Science and Technology, 2022, 37 : 527 - 538