Bidirectional generative transductive zero-shot learning

Cited by: 10
Authors
Li, Xinpeng [1 ]
Zhang, Dan [1 ]
Ye, Mao [1 ]
Li, Xue [2 ]
Dou, Qiang [1 ]
Lv, Qiao [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[2] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld 4072, Australia
Source
NEURAL COMPUTING & APPLICATIONS | 2021, Vol. 33, No. 10
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Zero-shot learning; Transductive; Bidirectional generation; CycleGAN;
DOI
10.1007/s00521-020-05322-7
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most zero-shot learning (ZSL) methods learn either a mapping from the visual feature space to the semantic feature space, or mappings from both spaces into a common joint space in which the two are aligned. However, these methods neither exploit the visual and semantic information sufficiently nor exclude the information that is useless for recognition. Moreover, most ZSL methods suffer from a strong bias problem: instances from unseen classes tend to be predicted as belonging to seen classes. In this paper, exploiting the advantages of generative adversarial networks (GANs), we propose a method based on bidirectional projections between the visual and semantic feature spaces. GANs perform the bidirectional generation and alignment between visual and semantic features, and a cycle-mapping structure ensures that the important information is preserved through the alignments. Furthermore, to better address the bias problem, pseudo-labels are generated for the unseen instances and the model is iteratively adjusted along with them. We conduct extensive experiments under both the traditional ZSL and the generalized ZSL settings. The results confirm that our method achieves state-of-the-art performance on the popular AWA2, aPY and SUN datasets.
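
The abstract describes two mechanisms: CycleGAN-style bidirectional generation between the visual and semantic feature spaces, and transductive pseudo-labeling of unseen instances. Below is a minimal PyTorch sketch of those two ideas; the layer sizes, the loss weight lam, and the names cycle_loss and pseudo_labels are illustrative assumptions rather than the authors' exact architecture, and the adversarial discriminators of the full GAN objective are omitted for brevity.

# Minimal sketch: bidirectional visual<->semantic generation with a
# cycle-consistency loss, plus transductive pseudo-labeling.
# Dimensions, layer sizes, and lam are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VIS_DIM, SEM_DIM = 2048, 85   # e.g., ResNet-101 features / AWA2 attributes

# G_vs maps visual features to semantic space; G_sv maps back.
G_vs = nn.Sequential(nn.Linear(VIS_DIM, 1024), nn.ReLU(),
                     nn.Linear(1024, SEM_DIM))
G_sv = nn.Sequential(nn.Linear(SEM_DIM, 1024), nn.ReLU(),
                     nn.Linear(1024, VIS_DIM))

def cycle_loss(v, s, lam=10.0):
    """v -> s -> v and s -> v -> s must reconstruct their inputs, so the
    bidirectional alignment keeps the important information."""
    s_fake = G_vs(v)              # visual projected into semantic space
    v_fake = G_sv(s)              # semantic projected into visual space
    v_rec = G_sv(s_fake)          # close the visual cycle
    s_rec = G_vs(v_fake)          # close the semantic cycle
    return lam * (F.l1_loss(v_rec, v) + F.l1_loss(s_rec, s))

@torch.no_grad()
def pseudo_labels(v_unseen, unseen_attrs):
    """Label each unlabeled unseen instance with the nearest unseen-class
    attribute vector in semantic space."""
    s_pred = G_vs(v_unseen)                        # (N, SEM_DIM)
    return torch.cdist(s_pred, unseen_attrs).argmin(dim=1)

# Usage: one training step on matched (visual, attribute) pairs.
v = torch.randn(32, VIS_DIM)    # batch of visual features
s = torch.randn(32, SEM_DIM)    # matching class attribute vectors
loss = cycle_loss(v, s)         # add the adversarial GAN losses in practice
loss.backward()

In the transductive setting, pseudo_labels would be recomputed after each training round and the selected unseen-class attribute vectors fed back as targets for the unlabeled instances, which is how the iterative adjustment described in the abstract counters the bias toward seen classes.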
Pages: 5313-5326
Page count: 14
Related papers
50 items in total
  • [1] Li, Xinpeng; Zhang, Dan; Ye, Mao; Li, Xue; Dou, Qiang; Lv, Qiao. Bidirectional generative transductive zero-shot learning. Neural Computing and Applications, 2021, 33(10): 5313-5326.
  • [2] Xing, Yun; Huang, Sheng; Huangfu, Luwen; Chen, Feiyu; Ge, Yongxin. Robust bidirectional generative network for generalized zero-shot learning. 2020 IEEE International Conference on Multimedia and Expo (ICME), 2020.
  • [3] Liu, Yang; Tao, Keda; Tian, Tianhui; Gao, Xinbo; Han, Jungong; Shao, Ling. Transductive zero-shot learning with generative model-driven structure alignment. Pattern Recognition, 2024, 153.
  • [4] Liu, Youfa; Du, Bo; Ni, Fuchuan. Adversarial strategy for transductive zero-shot learning. Information Sciences, 2021, 578: 750-761.
  • [5] Rahman, Shafin; Khan, Salman; Barnes, Nick. Transductive learning for zero-shot object detection. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019: 6081-6090.
  • [6] Xu, Yangyang; Xu, Xuemiao; Han, Guoqiang; He, Shengfeng. Holistically associated transductive zero-shot learning. IEEE Transactions on Cognitive and Developmental Systems, 2022, 14(2): 437-447.
  • [7] Song, Jie; Shen, Chengchao; Yang, Yezhou; Liu, Yang; Song, Mingli. Transductive unbiased embedding for zero-shot learning. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 1024-1033.
  • [8] Marmoreo, Federico; Cavazza, Jacopo; Murino, Vittorio. Transductive zero-shot learning by decoupled feature generation. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 2021: 3108-3117.
  • [9] Yu, Yunlong; Ji, Zhong; Guo, Jichang; Pang, Yanwei. Transductive zero-shot learning with adaptive structural embedding. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(9): 4116-4127.
  • [10] Fu, Yanwei; Hospedales, Timothy M.; Xiang, Tao; Gong, Shaogang. Transductive multi-view zero-shot learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(11): 2332-2345.