Self-Supervision with data-augmentation improves few-shot learning

Cited: 0
|
Authors
Kumar, Prashant [1 ]
Toshniwal, Durga [1 ]
Affiliations
[1] Indian Inst Technol, Dept Comp Sci & Engn, Roorkee 247667, Uttar Pradesh, India
Keywords
Few-shot learning; Image classification; Self-supervision; Novel visual categories; Meta-learning; Auxiliary task; Neural network; CLASSIFICATION;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Self-supervised learning (SSL) has shown exceptionally promising results in natural language processing and, more recently, in image classification and recognition. Recent work has demonstrated SSL's benefits on large unlabeled datasets, but relatively little investigation has been done into how well it works with smaller datasets. Typically, this challenge entails training a model on a very small quantity of data and then evaluating it on out-of-distribution data. Few-shot image classification aims to classify previously unseen classes using a limited number of training examples, and recent few-shot learning research focuses on developing good representation models that can quickly adapt to test tasks. In this paper, we investigate the role of self-supervision in the context of few-shot learning. We devise a model that improves the network's representation learning by employing a self-supervised auxiliary task based on composite rotation. The proposed auxiliary task rotates the image at two levels, inner and outer, and assigns one of 16 rotation classes to the modified image. Training with this auxiliary task enables the model to capture robust, learnable features that focus on finer visual details of the object in a given image. We find that the network learns to extract more generalized and discriminative features, which in turn enhances its few-shot classification performance. This approach significantly outperforms the state of the art on several public benchmarks. In addition, we demonstrate empirically that models trained with the proposed approach outperform the baseline model even when the query examples in an episode are not aligned with the support examples. Extensive ablation experiments validate the various components of our approach.
We also investigate our strategy's impact on the network's ability to discriminate visual features.
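The composite-rotation auxiliary task in the abstract, rotating an image at an inner and an outer level and assigning one of 16 rotation classes, could be sketched roughly as below. This is a minimal illustration only: the choice of four 90-degree angles per level (4 x 4 = 16 classes), the half-width central patch as the "inner" region, and the `4 * outer_k + inner_k` label encoding are assumptions, since the record does not specify the paper's exact transform.

```python
import numpy as np

def composite_rotate(img: np.ndarray, inner_k: int, outer_k: int):
    """Rotate the whole image and its central patch independently.

    img     : H x W (x C) array with H == W and H divisible by 4
    inner_k : number of 90-degree rotations for the central patch (0..3)
    outer_k : number of 90-degree rotations for the whole image (0..3)
    Returns the transformed image and a label in 0..15
    (hypothetical encoding: 4 * outer_k + inner_k).
    """
    h, w = img.shape[:2]
    assert h == w and h % 4 == 0, "expects a square image, side divisible by 4"
    # Outer level: rotate the whole image.
    out = np.rot90(img, k=outer_k, axes=(0, 1)).copy()
    # Inner level: the central square patch (half the side length)
    # gets its own, independent rotation.
    q = h // 4
    patch = out[q:h - q, q:w - q]
    out[q:h - q, q:w - q] = np.rot90(patch, k=inner_k, axes=(0, 1))
    label = 4 * outer_k + inner_k  # one of 16 composite-rotation classes
    return out, label
```

In a self-supervised auxiliary head, the network would then be trained to predict `label` from the transformed image alongside the main few-shot objective.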
Pages: 2976-2997 (22 pages)
Related papers (50 total)
  • [1] Self-Supervision with data-augmentation improves few-shot learning
    Prashant Kumar
    Durga Toshniwal
    [J]. Applied Intelligence, 2024, 54 (4) : 2976 - 2997
  • [2] Few-shot learning through contextual data augmentation
    Arthaud, Farid
    Bawden, Rachel
    Birch, Alexandra
    [J]. 16TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EACL 2021), 2021, : 1049 - 1062
  • [3] Few-Shot Learning With Enhancements to Data Augmentation and Feature Extraction
    Zhang, Yourun
    Gong, Maoguo
    Li, Jianzhao
    Feng, Kaiyuan
    Zhang, Mingyang
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [4] FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning
    Zhou, Jing
    Zheng, Yanan
    Tang, Jie
    Li, Jian
    Yang, Zhilin
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 8646 - 8665
  • [5] Improving Augmentation Efficiency for Few-Shot Learning
    Cho, Wonhee
    Kim, Eunwoo
    [J]. IEEE ACCESS, 2022, 10 : 17697 - 17706
  • [6] Few-shot Partial Multi-label Learning with Data Augmentation
    Sun, Yifan
    Zhao, Yunfeng
    Yu, Guoxian
    Yan, Zhongmin
    Domeniconi, Carlotta
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2022, : 478 - 487
  • [7] Data Augmentation Aided Few-Shot Learning for Specific Emitter Identification
    Zhang, Xixi
    Wang, Yu
    Zhang, Yibin
    Lin, Yun
    Gui, Guan
    Tomoaki, Ohtsuki
    Sari, Hikmet
    [J]. 2022 IEEE 96TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-FALL), 2022,
  • [8] Few-Shot Charge Prediction with Data Augmentation and Feature Augmentation
    Wang, Peipeng
    Zhang, Xiuguo
    Cao, Zhiying
    [J]. APPLIED SCIENCES-BASEL, 2021, 11 (22):
  • [9] STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
    Tu Vu
    Minh-Thang Luong
    Le, Quoc V.
    Simon, Grady
    Iyyer, Mohit
    [J]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 5715 - 5731
  • [10] Ortho-Shot: Low Displacement Rank Regularization with Data Augmentation for Few-Shot Learning
    Osahor, Uche
    Nasrabadi, Nasser M.
    [J]. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 2040 - 2049