Three-Dimension Attention Mechanism and Self-Supervised Pretext Task for Augmenting Few-Shot Learning

Cited by: 0
Authors
Liang, Yong [1 ]
Chen, Zetao [1 ]
Lin, Daoqian [1 ]
Tan, Junwen [1 ]
Yang, Zhenhao [1 ]
Li, Jie [1 ]
Li, Xinhai [1 ]
Affiliations
[1] Guilin Univ Technol, Coll Mech & Control Engn, Guilin 541006, Peoples R China
Keywords
Few-shot; self-supervised pretext task learning; deep learning; image classification; attention mechanism
DOI
10.1109/ACCESS.2023.3285721
CLC classification number
TP [automation technology, computer technology]
Subject classification code
0812
Abstract
The main challenge of few-shot learning lies in the limited amount of labeled data. In addition, because image-level labels often fail to describe image features accurately, it is difficult for a model to achieve good generalization and robustness. This problem has not yet been well solved, and existing metric-based methods still have room for improvement. To address this issue, we propose a few-shot learning method based on a three-dimensional attention mechanism and self-supervised learning. The attention module extracts more representative features by focusing on semantically informative content through spatial and channel attention. The self-supervised component adopts a rotation-prediction pretext task, which adds semantic supervision without requiring additional manual labeling; its loss is combined with the supervised learning loss during training to improve model robustness. We conducted extensive experiments on four popular few-shot datasets and achieved state-of-the-art performance in both 5-shot and 1-shot scenarios. The experimental results show that our work provides a novel and effective approach to few-shot learning.
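To make the abstract's two ingredients concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: a channel-plus-spatial attention block (CBAM-like, matching the abstract's "spatial and channel attention") and a rotation-prediction pretext task whose cross-entropy is added to the supervised loss. The class and function names (ChannelSpatialAttention, rotation_pretext_batch, combined_loss) and the weighting factor lam are illustrative assumptions, not names from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSpatialAttention(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight each channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over channel-pooled maps.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = F.adaptive_avg_pool2d(x, 1).view(b, c)
        mx = F.adaptive_max_pool2d(x, 1).view(b, c)
        ch = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx)).view(b, c, 1, 1)
        x = x * ch
        # Spatial attention from per-pixel channel statistics.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        sp = torch.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return x * sp


def rotation_pretext_batch(images):
    """Rotate each image by 0/90/180/270 degrees; labels come for free from the rotation index."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)


def combined_loss(class_logits, class_labels, rot_logits, rot_labels, lam=1.0):
    """Supervised classification loss plus the auxiliary rotation-prediction loss."""
    return F.cross_entropy(class_logits, class_labels) + lam * F.cross_entropy(rot_logits, rot_labels)
```

In such a setup the backbone processes four rotated copies of each image, a separate linear head predicts the rotation (producing rot_logits), and lam balances the auxiliary objective against the supervised few-shot objective; the exact head design and weighting used in the paper may differ.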
Pages: 59428-59437
Number of pages: 10
Related papers
50 records in total
  • [21] Self-Supervised and Few-Shot Contrastive Learning Frameworks for Text Clustering
    Shi, Haoxiang
    Sakai, Tetsuya
    [J]. IEEE ACCESS, 2023, 11 : 84134 - 84143
  • [22] A Self-Supervised Deep Learning Framework for Unsupervised Few-Shot Learning and Clustering
    Zhang, Hongjing
    Zhan, Tianyang
    Davidson, Ian
    [J]. PATTERN RECOGNITION LETTERS, 2021, 148 : 75 - 81
  • [23] Unsupervised Few-Shot Feature Learning via Self-Supervised Training
    Ji, Zilong
    Zou, Xiaolong
    Huang, Tiejun
    Wu, Si
    [J]. FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2020, 14
  • [24] FLUID: Few-Shot Self-Supervised Image Deraining
    Rai, Shyam Nandan
    Saluja, Rohit
    Arora, Chetan
    Balasubramanian, Vineeth N.
    Subramanian, Anbumani
    Jawahar, C. V.
    [J]. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 418 - 427
  • [25] Few-shot defect recognition of metal surfaces via attention-embedding and self-supervised learning
    Liu, Zhenyu
    Song, Yiguo
    Tang, Ruining
    Duan, Guifang
    Tan, Jianrong
    [J]. JOURNAL OF INTELLIGENT MANUFACTURING, 2023, 34 (08) : 3507 - 3521
  • [27] Self-supervised Network Evolution for Few-shot Classification
    Tang, Xuwen
    Teng, Zhu
    Zhang, Baopeng
    Fan, Jianping
    [J]. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 3045 - 3051
  • [28] MLDAC: Multi-Task Dense Attention Computation Self-Supervised Few-Shot Semantic Segmentation Method
    Wang, Weihang
    Zhang, Yi
    [J]. COMPUTER ENGINEERING AND APPLICATIONS, 2025, 61 (04) : 211 - 221
  • [29] A Survey of Self-Supervised and Few-Shot Object Detection
    Huang, Gabriel
    Laradji, Issam
    Vazquez, David
    Lacoste-Julien, Simon
    Rodriguez, Pau
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (04) : 4071 - 4089
  • [30] SFT: Few-Shot Learning via Self-Supervised Feature Fusion With Transformer
    Lim, Jit Yan
    Lim, Kian Ming
    Lee, Chin Poo
    Tan, Yong Xuan
    [J]. IEEE ACCESS, 2024, 12 : 86690 - 86703