Variational Disentangle Zero-Shot Learning

Times Cited: 0
Authors
Su, Jie [1 ]
Wan, Jinhao [2 ]
Li, Taotao [2 ]
Li, Xiong [2 ]
Ye, Yuheng [2 ]
Affiliations
[1] Newcastle Univ, Sch Comp, Newcastle Upon Tyne NE4 5TG, England
[2] Zhejiang Univ Technol, ISPNU Lab, Hangzhou 310023, Peoples R China
Keywords
zero-shot learning; computer science; pattern recognition; deep learning
DOI
10.3390/math11163578
Chinese Library Classification (CLC) Number
O1 [Mathematics];
Subject Classification Code
0701 ; 070101 ;
Abstract
Existing zero-shot learning (ZSL) methods typically focus on mapping from the feature space (e.g., the visual space) to class-level attributes, which often leads to a non-injective projection. Such a mapping may cause a significant loss of instance-level information. While an ideal projection to instance-level attributes would be desirable, it can also be prohibitively expensive and thus impractical in many scenarios. In this work, we propose a variational disentangle zero-shot learning (VDZSL) framework that addresses this problem by constructing variational instance-specific attributes from a class-specific semantic latent distribution. Specifically, our approach disentangles each instance into class-specific attributes and the corresponding variant features. Unlike transductive ZSL, which assumes that the attributes of unseen classes are known beforehand, VDZSL does not rely on this strong assumption, making it more applicable in real-world scenarios. Extensive experiments conducted on three popular ZSL benchmark datasets (i.e., AwA2, CUB, and FLO) validate the effectiveness of our approach. In the conventional ZSL setting, our method improves on advanced approaches by 12~15% and achieves a classification accuracy of 70% on the AwA2 dataset. Furthermore, under the more challenging generalized ZSL setting, our approach gains an improvement of 5~15% over the advanced methods.
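The disentanglement described in the abstract (an encoder that infers a class-specific attribute distribution together with instance-specific variant features, and a decoder that reconstructs the visual feature from both factors) can be pictured with a minimal PyTorch-style sketch. This is an illustrative assumption, not the authors' released code: the module name VDZSLSketch, the dimensions (2048-d visual features, 85-d attributes as in AwA2), and the simple reconstruction/KL/alignment objective are hypothetical choices made only to show the structure of such a model.

# Hypothetical sketch of a variational disentangling encoder/decoder for ZSL.
# All names, dimensions, and loss weights below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VDZSLSketch(nn.Module):
    def __init__(self, feat_dim=2048, attr_dim=85, variant_dim=64, hidden=512):
        super().__init__()
        # Encoder: maps a visual feature to a class-specific attribute
        # distribution (mu, logvar) and a residual "variant" code.
        self.enc = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.attr_mu = nn.Linear(hidden, attr_dim)
        self.attr_logvar = nn.Linear(hidden, attr_dim)
        self.variant = nn.Linear(hidden, variant_dim)
        # Decoder: reconstructs the visual feature from both factors.
        self.dec = nn.Sequential(
            nn.Linear(attr_dim + variant_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.attr_mu(h), self.attr_logvar(h)
        # Reparameterization: sample an instance-specific attribute vector
        # from the inferred class-level semantic distribution.
        z_attr = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        z_var = self.variant(h)
        x_rec = self.dec(torch.cat([z_attr, z_var], dim=-1))
        return x_rec, mu, logvar, z_attr

def loss_sketch(x, x_rec, mu, logvar, class_attr):
    # Reconstruction + KL toward a unit Gaussian + an alignment term that
    # pulls the inferred attribute mean toward the ground-truth class attributes.
    rec = F.mse_loss(x_rec, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    align = F.mse_loss(mu, class_attr)
    return rec + kld + align

# Illustrative usage: x is a batch of visual features, class_attr the
# attribute vectors of the corresponding seen classes.
#   model = VDZSLSketch()
#   x_rec, mu, logvar, z_attr = model(x)
#   loss = loss_sketch(x, x_rec, mu, logvar, class_attr)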
Pages: 13
Related Papers
50 records in total
  • [31] LVQ Treatment for Zero-Shot Learning
    Ismailoglu, Firat
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2023, 31 (01) : 216 - 237
  • [32] Attribute subspaces for zero-shot learning
    Zhou, Lei
    Liu, Yang
    Bai, Xiao
    Li, Na
    Yu, Xiaohan
    Zhou, Jun
    Hancock, Edwin R.
    PATTERN RECOGNITION, 2023, 144
  • [33] A review on multimodal zero-shot learning
    Cao, Weipeng
    Wu, Yuhao
    Sun, Yixuan
    Zhang, Haigang
    Ren, Jin
    Gu, Dujuan
    Wang, Xingkai
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2023, 13 (02)
  • [34] Zero-Shot Learning with Attribute Selection
    Guo, Yuchen
    Ding, Guiguang
    Han, Jungong
    Tang, Sheng
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 6870 - 6877
  • [35] Research and Development on Zero-Shot Learning
    Zhang L.-N.
    Zuo X.
    Liu J.-W.
    Zidonghua Xuebao/Acta Automatica Sinica, 2020, 46 (01): 1 - 23
  • [36] Synthesizing Samples for Zero-shot Learning
    Guo, Yuchen
    Ding, Guiguang
    Han, Jungong
    Gao, Yue
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 1774 - 1780
  • [37] Towards Open Zero-Shot Learning
    Marmoreo, Federico
    Carrazco, Julio Ivan Davila
    Cavazza, Jacopo
    Murino, Vittorio
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT II, 2022, 13232 : 564 - 575
  • [38] Zero-Shot Compositional Concept Learning
    Xu, Guangyue
    Kordjamshidi, Parisa
    Chai, Joyce Y.
    1ST WORKSHOP ON META LEARNING AND ITS APPLICATIONS TO NATURAL LANGUAGE PROCESSING (METANLP 2021), 2021, : 19 - 27
  • [39] Improving Zero-Shot Generalization for CLIP with Variational Adapter
    Lu, Ziqian
    Shen, Fengli
    Liu, Mushui
    Yu, Yunlong
    Li, Xi
    COMPUTER VISION - ECCV 2024, PT XX, 2025, 15078 : 328 - 344
  • [40] Variational Autoencoder for Zero-Shot Recognition of Bai Characters
    Lin, Weiwei
    Ma, Tai
    Zhang, Zeqing
    Li, Xiaofan
    Xue, Xingsi
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022