A Multitask Latent Feature Augmentation Method for Few-Shot Learning

Cited by: 0
Authors
Xu, Jian [1 ]
Liu, Bo [1 ]
Xiao, Yanshan [2 ]
Affiliations
[1] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
[2] Guangdong Univ Technol, Sch Comp, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Semantics; Training; Robustness; Data models; Strain; Optimization; Feature map weight; few-shot learning (FSL); latent feature augmentation (LFA); multitask (MT);
DOI
10.1109/TNNLS.2022.3213576
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-shot learning (FSL) aims to learn novel concepts quickly from a few labeled novel samples, using transferable knowledge learned from a base dataset. Existing FSL methods usually treat each sample as a single feature point in the embedding space and classify through a single comparison task. However, even with good transferable knowledge, the few single feature points in a novel meta-testing episode remain vulnerable to noise, because the novel categories are never seen in the base dataset. Moreover, existing FSL models are trained on only one comparison task and ignore the fact that different semantic feature maps carry different weights for different comparison objects and tasks, so they cannot fully exploit the information available from multiple comparison tasks and objects to make the latent features (LFs) more robust given only few-shot samples. In this article, we propose a novel multitask LF augmentation (MTLFA) framework that learns the meta-knowledge of generalizing key intraclass features and distinguishing interclass features from only few-shot samples through an LF augmentation (LFA) module and a multitask (MT) framework. MTLFA treats the support features as samples drawn from class-specific LF distributions, which enhances the diversity of the support features and reduces the impact of noise on the few-shot support samples. Furthermore, the MT framework gathers comparison-task-related and episode-related information from multiple comparison tasks in which different semantic feature maps have different weights, adjusting the prior LFs and producing a more robust and effective episode-related classifier. We also analyze the feasibility and effectiveness of MTLFA theoretically, based on Hoeffding's inequality and the Chernoff bounding method. Extensive experiments on three benchmark datasets show that MTLFA achieves state-of-the-art FSL performance; the results support our theoretical analysis and confirm the effectiveness and robustness of the MTLFA framework.
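The abstract describes LFA as treating the support features of each class as draws from a class-specific latent-feature distribution. The sketch below is a minimal, illustrative reading of that idea only, not the authors' MTLFA implementation: it assumes the distribution is Gaussian, estimates its mean from the few support embeddings, shrinks the covariance toward a scaled identity for stability, samples extra latent features per class, and averages them into prototypes for nearest-prototype comparison. All function and parameter names (e.g. augment_support_features, n_aug, shrink) are hypothetical.

```python
# Illustrative sketch of class-specific latent feature augmentation (LFA).
# Assumption: each class's latent features follow a Gaussian whose mean and
# (shrunken) covariance are estimated from the few support embeddings.
# This is NOT the paper's MTLFA implementation; names are hypothetical.
import numpy as np

def augment_support_features(support, n_aug=100, shrink=0.5, rng=None):
    """support: dict mapping class label -> (k, d) array of support embeddings.
    Returns a dict mapping class label -> (k + n_aug, d) array that mixes the
    original support embeddings with samples from the fitted Gaussian."""
    rng = np.random.default_rng() if rng is None else rng
    augmented = {}
    for label, feats in support.items():
        mu = feats.mean(axis=0)  # class-specific mean of the latent features
        d = feats.shape[1]
        # The empirical covariance is unreliable with k-shot data, so shrink
        # it toward a scaled identity matrix for a stable, well-posed estimate.
        emp_cov = np.cov(feats, rowvar=False) if feats.shape[0] > 1 else np.eye(d)
        cov = (1.0 - shrink) * emp_cov + shrink * np.eye(d) * emp_cov.trace() / d
        samples = rng.multivariate_normal(mu, cov, size=n_aug)
        augmented[label] = np.vstack([feats, samples])
    return augmented

def nearest_prototype_predict(query, augmented):
    """Classify (m, d) query embeddings by Euclidean distance to prototypes
    computed from the augmented support sets."""
    labels = sorted(augmented)
    protos = np.stack([augmented[c].mean(axis=0) for c in labels])   # (n_way, d)
    dists = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (m, n_way)
    return [labels[i] for i in dists.argmin(axis=1)]
```

As a usage example, in a 5-way 5-shot episode `support` would hold five (5, d) arrays of backbone embeddings. The multitask weighting of semantic feature maps across comparison tasks described in the abstract is outside the scope of this sketch.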
Pages: 6976-6990
Number of pages: 15