Contrastive Prototype-Guided Generation for Generalized Zero-Shot Learning

Cited by: 1
Authors
Wang, Yunyun [1 ]
Mao, Jian [1 ]
Guo, Chenguang [1 ]
Chen, Songcan [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Nanjing, Peoples R China
[2] Nanjing Univ Aeronaut & Astronaut, Nanjing, Peoples R China
Keywords
Zero-shot learning; Generative adversarial network; Contrastive prototype; Feature diversity;
DOI
10.1016/j.neunet.2024.106324
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Generalized zero-shot learning (GZSL) aims to recognize both seen and unseen classes, while only samples from seen classes are available for training. Mainstream methods mitigate the lack of unseen training data by synthesizing visual samples for unseen classes. However, the sample generator is actually trained only on seen-class samples, and semantic descriptions of unseen classes are merely fed to the pre-trained generator at generation time; the generator is therefore biased towards seen categories, and the quality of generated unseen samples, in both precision and diversity, remains the main learning challenge. To this end, we propose Prototype-Guided Generation for Generalized Zero-Shot Learning (PGZSL), which guides sample generation with unseen-class knowledge. First, unseen data generation in PGZSL is guided and rectified by contrastive prototypical anchors that enforce both class semantic consistency and feature discriminability. Second, PGZSL introduces Certainty-Driven Mixup in the generator to enrich the diversity of generated unseen samples while suppressing the generation of uncertain boundary samples. Empirical results on five benchmark datasets show that PGZSL significantly outperforms state-of-the-art (SOTA) methods on both ZSL and GZSL tasks.
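The abstract's Certainty-Driven Mixup builds on standard mixup interpolation between feature vectors. As a rough illustration only (the paper's certainty-driven pair weighting and filtering are not reproduced here, and all names below are hypothetical), plain mixup can be sketched as:

```python
import numpy as np

def mixup_features(x1, x2, alpha=0.2, rng=None):
    """Blend two feature vectors with a Beta(alpha, alpha)-sampled
    coefficient, as in standard mixup. Returns the mixed vector and
    the mixing coefficient lam in (0, 1)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    # Convex combination: the result lies on the segment between x1 and x2.
    return lam * x1 + (1.0 - lam) * x2, lam
```

A certainty-driven variant would presumably downweight or discard mixed samples that fall near class boundaries; that selection criterion is specific to the paper and is not shown here.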
Pages: 8