Variational Disentangle Zero-Shot Learning

Cited: 0
Authors
Su, Jie [1 ]
Wan, Jinhao [2 ]
Li, Taotao [2 ]
Li, Xiong [2 ]
Ye, Yuheng [2 ]
Affiliations
[1] Newcastle Univ, Sch Comp, Newcastle Upon Tyne NE4 5TG, England
[2] Zhejiang Univ Technol, ISPNU Lab, Hangzhou 310023, Peoples R China
Keywords
zero-shot learning; computer science; pattern recognition; deep learning
DOI
10.3390/math11163578
CLC Number
O1 [Mathematics];
Subject Classification Codes
0701 ; 070101 ;
Abstract
Existing zero-shot learning (ZSL) methods typically focus on mapping from the feature space (e.g., visual space) to class-level attributes, often leading to a non-injective projection. Such a mapping may cause a significant loss of instance-level information. While an ideal projection to instance-level attributes would be desirable, it can also be prohibitively expensive and thus impractical in many scenarios. In this work, we propose a variational disentangle zero-shot learning (VDZSL) framework that addresses this problem by constructing variational instance-specific attributes from a class-specific semantic latent distribution. Specifically, our approach disentangles each instance into class-specific attributes and the corresponding variant features. Unlike transductive ZSL, which assumes that unseen classes' attributes are known beforehand, our VDZSL method does not rely on this strong assumption, making it more applicable in real-world scenarios. Extensive experiments conducted on three popular ZSL benchmark datasets (i.e., AwA2, CUB, and FLO) validate the effectiveness of our approach. In the conventional ZSL setting, our method demonstrates an improvement of 12-15% relative to advanced approaches and achieves a classification accuracy of 70% on the AwA2 dataset. Furthermore, under the more challenging generalized ZSL setting, our approach gains an improvement of 5-15% compared with advanced methods.
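The pipeline the abstract describes — encode an instance into a class-specific attribute distribution plus variant features, sample an instance-specific attribute via the reparameterization trick, and classify by nearest class-level attribute — can be sketched in numpy. This is a minimal illustration under assumed shapes and linear encoders (the paper's actual model is a deep variational network; all weights and dimensions here are hypothetical):

```python
import numpy as np

def encode(x, W_mu, W_logvar, W_var):
    """Split an instance into a class-specific attribute posterior
    (mu, logvar) and its instance-level variant features.
    Linear maps stand in for the paper's deep encoder (assumption)."""
    return x @ W_mu, x @ W_logvar, x @ W_var

def reparameterize(mu, logvar, rng):
    """Draw a variational instance-specific attribute from the
    class-specific semantic latent distribution."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def classify(z_attr, class_attrs):
    """Conventional ZSL inference: assign each sampled attribute to the
    nearest class-level attribute vector."""
    dists = np.linalg.norm(z_attr[:, None, :] - class_attrs[None, :, :], axis=-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))            # 4 instances, 8-d visual features
W_mu, W_logvar, W_var = (rng.standard_normal((8, 3)) for _ in range(3))
class_attrs = rng.standard_normal((5, 3))  # 5 unseen classes, 3-d attributes

mu, logvar, z_var = encode(x, W_mu, W_logvar, W_var)
z_attr = reparameterize(mu, logvar, rng)   # instance-specific attributes
preds = classify(z_attr, class_attrs)      # one class index per instance
```

The variant features `z_var` carry the instance-level information that a direct feature-to-attribute projection would discard, which is the loss the abstract attributes to non-injective mappings.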
Pages: 13
Related Papers
50 records
  • [41] Semantic Autoencoder for Zero-Shot Learning
    Kodirov, Elyor
    Xiang, Tao
    Gong, Shaogang
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 4447 - 4456
  • [42] Prototype rectification for zero-shot learning
    Yi, Yuanyuan
    Zeng, Guolei
    Ren, Bocheng
    Yang, Laurence T.
    Chai, Bin
    Li, Yuxin
    PATTERN RECOGNITION, 2024, 156
  • [43] Zero-Shot Program Representation Learning
    Cui, Nan
    Jiang, Yuze
    Gu, Xiaodong
    Shen, Beijun
    arXiv, 2022
  • [44] Zero-shot Learning With Fuzzy Attribute
    Liu, Chongwen
    Shang, Zhaowei
    Tang, Yuan Yan
    2017 3RD IEEE INTERNATIONAL CONFERENCE ON CYBERNETICS (CYBCONF), 2017, : 277 - 282
  • [45] Detecting Errors with Zero-Shot Learning
    Wu, Xiaoyu
    Wang, Ning
    ENTROPY, 2022, 24 (07)
  • [46] Zero-Shot Semantic Segmentation via Variational Mapping
    Kato, Naoki
    Yamasaki, Toshihiko
    Aizawa, Kiyoharu
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 1363 - 1370
  • [47] Landmark Selection for Zero-shot Learning
    Guo, Yuchen
    Ding, Guiguang
    Han, Jungong
    Yan, Chenggang
    Zhang, Jiyong
    Dai, Qionghai
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 2435 - 2441
  • [48] Recent Advances in Zero-Shot Learning
    Lan Hong
    Fang Zhiyu
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2020, 42 (05) : 1188 - 1200
  • [49] Audio-Visual Generalized Zero-Shot Learning Based on Variational Information Bottleneck
    Li, Yapeng
    Luo, Yong
    Du, Bo
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 450 - 455
  • [50] Multi-label Generalized Zero-Shot Learning Using Identifiable Variational Autoencoders
    Gull, Muqaddas
    Arif, Omar
    EXTENDED REALITY, XR SALENTO 2023, PT II, 2023, 14219 : 35 - 50