Less is more: A closer look at semantic-based few-shot learning

Cited: 0
Authors
Zhou, Chunpeng [1 ]
Yu, Zhi [2 ]
Yuan, Xilu [1 ]
Zhou, Sheng [2 ]
Bu, Jiajun [1 ]
Wang, Haishuai [1 ,3 ]
Affiliations
[1] Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science, Zhejiang University, Hangzhou 310000, China
[2] School of Software Technology, Zhejiang University, Ningbo 310027, China
[3] Shanghai Artificial Intelligence Laboratory, Shanghai 200125, China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial machine learning; Contrastive learning; Federated learning; Self-supervised learning
DOI
10.1016/j.inffus.2024.102672
Abstract
Few-shot Learning (FSL) aims to learn and distinguish new categories from a scant number of available samples, presenting a significant challenge in the realm of deep learning. Recent studies have sought to leverage the additional semantic or linguistic information of scarce categories with a pre-trained language model to facilitate learning, thus partially alleviating the problem of insufficient supervision signals. Nonetheless, the full potential of semantic information and pre-trained language models has so far been underestimated in few-shot learning, resulting in limited performance gains. To address this, we propose a straightforward and effective framework for few-shot learning tasks, specifically designed to exploit semantic information and the language model. Specifically, we explicitly harness the zero-shot capability of the pre-trained language model with learnable prompts, and we directly add the visual feature to the textual feature for inference, without the intricately designed fusion modules of prior studies. Additionally, we apply self-ensemble and distillation to further enhance performance. Extensive experiments across four widely used few-shot datasets demonstrate that our simple framework achieves impressive results. Particularly noteworthy is its performance on the 1-shot learning task, surpassing the current state-of-the-art by an average of 3.3% in classification accuracy. Our code will be available at https://github.com/zhouchunpong/SimpleFewShot. © 2024 Elsevier B.V.
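The additive fusion the abstract describes (adding the visual feature to the textual feature, then classifying by similarity) can be sketched as below. This is an illustrative reconstruction, not the authors' released code: the function names, the L2 normalization, and the cosine-similarity nearest-prototype classifier are assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale vectors to unit length along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def additive_fusion_classify(query, visual_protos, text_feats):
    """Classify a query feature against class prototypes formed by
    simply adding each class's visual prototype to its textual feature
    (no learned fusion module). Returns the predicted class index."""
    fused = l2_normalize(l2_normalize(visual_protos) + l2_normalize(text_feats))
    scores = l2_normalize(query) @ fused.T  # cosine similarity per class
    return int(np.argmax(scores))

# Toy usage: orthogonal one-hot features make the prediction unambiguous.
protos = np.eye(8)[:3]   # visual prototypes for 3 classes (8-dim)
texts = np.eye(8)[:3]    # matching textual features
print(additive_fusion_classify(np.eye(8)[1], protos, texts))  # → 1
```

In a real few-shot setting the visual prototype would be the mean of the support-set image embeddings and the textual feature would come from the pre-trained language model with learnable prompts; here both are stand-in arrays.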
Related papers
50 records total
  • [41] Learning Meta-class Memory for Few-Shot Semantic Segmentation
    Wu, Zhonghua
    Shi, Xiangxi
    Lin, Guosheng
    Cai, Jianfei
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 497 - 506
  • [42] Few-Shot Lifelong Learning
    Mazumder, Pratik
    Singh, Pravendra
    Rai, Piyush
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 2337 - 2345
  • [43] Differentiable Meta-Learning Model for Few-Shot Semantic Segmentation
    Tian, Pinzhuo
    Wu, Zhangkai
    Qi, Lei
    Wang, Lei
    Shi, Yinghuan
    Gao, Yang
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 12087 - 12094
  • [44] Quaternion-Valued Correlation Learning for Few-Shot Semantic Segmentation
    Zheng, Zewen
    Huang, Guoheng
    Yuan, Xiaochen
    Pun, Chi-Man
    Liu, Hongrui
    Ling, Wing-Kuen
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (05) : 2102 - 2115
  • [45] VSA: Adaptive Visual and Semantic Guided Attention on Few-Shot Learning
    Chai, Jin
    Chen, Yisheng
    Shen, Weinan
    Zhang, Tong
    Chen, C. L. Philip
    ARTIFICIAL INTELLIGENCE, CICAI 2022, PT I, 2022, 13604 : 280 - 292
  • [46] SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning
    Yang, Fengyuan
    Wang, Ruiping
    Chen, Xilin
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 1586 - 1596
  • [47] Few-shot classification with task-adaptive semantic feature learning
    Pan, Mei-Hong
    Xin, Hong-Yi
    Xia, Chun-Qiu
    Shen, Hong-Bin
    PATTERN RECOGNITION, 2023, 141
  • [48] Learning Non-target Knowledge for Few-shot Semantic Segmentation
    Liu, Yuanwei
    Liu, Nian
    Cao, Qinglong
    Yao, Xiwen
    Han, Junwei
    Shao, Ling
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 11563 - 11572
  • [49] Few-Shot Adaptation for Multimedia Semantic Indexing
    Inoue, Nakamasa
    Shinoda, Koichi
    PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018, : 1110 - 1118
  • [50] Few-Shot Semantic Parsing for New Predicates
    Li, Zhuang
    Qu, Lizhen
    Huang, Shuo
    Haffari, Gholamreza
    16TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EACL 2021), 2021, : 1281 - 1291