Incremental Embedding Learning via Zero-Shot Translation

Cited by: 0
Authors
Wei, Kun [1 ]
Deng, Cheng [1 ]
Yang, Xu [1 ]
Li, Maosen [1 ]
Institutions
[1] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
Keywords
DOI
None available
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Modern deep learning methods have achieved great success in machine learning and computer vision by learning from sets of pre-defined datasets. However, these methods perform unsatisfactorily when applied to real-world situations. The reason is that learning new tasks causes the trained model to quickly forget the knowledge of old tasks, a phenomenon referred to as catastrophic forgetting. Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks but ignore the same problem in embedding networks, which are the basic networks for image retrieval, face recognition, zero-shot learning, etc. Unlike in traditional incremental classification networks, the main challenge for embedding networks under the incremental learning setting is the semantic gap between the embedding spaces of two adjacent tasks. We therefore propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI), which leverages zero-shot translation to estimate the semantic gap without any exemplars. We then learn a unified representation for two adjacent tasks in the sequential learning process, which precisely captures the relationships between previous and current classes. In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks. We conduct extensive experiments on CUB-200-2011 and CIFAR100, and the experimental results demonstrate the effectiveness of our method. The code of our method has been released at https://github.com/Drkun/ZSTCI.
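The abstract's central idea, translating between the embedding spaces of two adjacent tasks without stored exemplars, can be illustrated with a minimal sketch. This is not the authors' implementation (ZSTCI learns its translation jointly with the network; see the released code): here the translation is simplified to a linear map fitted by least squares, and the embeddings, dimensions, and variable names are all synthetic assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: phi_old and phi_new are embeddings of the SAME
# current-task images produced by the old-task and new-task networks.
# No exemplars of old classes are required to fit the translation.
d, n = 16, 200
phi_old = rng.normal(size=(n, d))
drift = np.eye(d) + 0.1 * rng.normal(size=(d, d))   # simulated semantic gap
phi_new = phi_old @ drift + 0.01 * rng.normal(size=(n, d))

# Estimate the gap with a linear translation T: old space -> new space,
# by least squares over current-task data only.
T, *_ = np.linalg.lstsq(phi_old, phi_new, rcond=None)

# Old-class prototypes stored from the previous task can now be mapped
# into the new space, yielding one unified representation for both tasks.
old_prototypes = rng.normal(size=(5, d))
translated_prototypes = old_prototypes @ T

# Sanity check: the translation should explain the new embeddings well.
residual = np.linalg.norm(phi_old @ T - phi_new) / np.linalg.norm(phi_new)
print(f"relative residual: {residual:.3f}")
```

With the current-task data passing through both networks, the fitted map aligns the two spaces, so old-class prototypes remain comparable to newly learned embeddings.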
Pages: 10254-10262
Number of pages: 9