Adversarial Zero-Shot Learning with Semantic Augmentation

Cited: 0
Authors
Tong, Bin [1 ]
Klinkigt, Martin [1 ]
Chen, Junwen [1 ]
Cui, Xiankun [1 ]
Kong, Quan [1 ]
Murakami, Tomokazu [1 ]
Kobayashi, Yoshiyuki [1 ]
Affiliations
[1] Hitachi, R&D Group, Tokyo, Japan
Keywords
DOI
Not available
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In situations in which labels are expensive or difficult to obtain, deep neural networks for object recognition often struggle to achieve fair performance. Zero-shot learning is dedicated to this problem: it aims to recognize objects of unseen classes by transferring knowledge from seen classes via a shared intermediate representation. Exploiting the manifold structure of the seen training samples is widely regarded as important for learning a robust mapping between samples and the intermediate representation, which is crucial for transferring the knowledge. However, irregularities in that structure, such as a lack of variation among the samples of certain classes and heavily overlapping clusters of different classes, may result in an inappropriate mapping. Additionally, in a high-dimensional mapping space the hubness problem may arise, in which a single unseen class is highly likely to be assigned to samples of many different classes. To mitigate these problems, we use a generative adversarial network to synthesize samples with specified semantics, covering both a higher diversity of the given classes and interpolated semantics of pairs of classes. We propose a simple yet effective method for applying the augmented semantics in hinge loss functions to learn a robust mapping. The proposed method was extensively evaluated on small- and large-scale datasets and shows a significant improvement over state-of-the-art methods.
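The abstract only outlines the approach, so the following minimal PyTorch-style sketch is included purely as an illustration of the two ingredients it names: augmenting class semantics by interpolating pairs of class embeddings fed to a conditional generator, and learning a visual-to-semantic mapping with a hinge (max-margin ranking) loss over the augmented semantics. It is not the authors' implementation; the dimensions (SEM_DIM, VIS_DIM, NOISE_DIM), the margin, the network sizes, and the linear mapping are all assumptions, and the adversarial discriminator with its training loop is omitted.

# Illustrative sketch only (assumed architecture and hyperparameters, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

SEM_DIM, VIS_DIM, NOISE_DIM, MARGIN = 85, 2048, 64, 1.0      # assumed sizes

class ConditionalGenerator(nn.Module):
    """Synthesizes visual features conditioned on a class-semantic vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEM_DIM + NOISE_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, VIS_DIM), nn.ReLU(),
        )

    def forward(self, semantics, noise):
        return self.net(torch.cat([semantics, noise], dim=1))

def interpolate_semantics(class_embeds, num_pairs):
    """Mixes random pairs of class embeddings to obtain augmented (virtual) semantics."""
    i = torch.randint(0, class_embeds.size(0), (num_pairs,))
    j = torch.randint(0, class_embeds.size(0), (num_pairs,))
    lam = torch.rand(num_pairs, 1)
    return lam * class_embeds[i] + (1.0 - lam) * class_embeds[j]

def hinge_ranking_loss(mapped_feats, true_sem, all_sem):
    """Max-margin ranking loss: the score of the correct semantics should exceed the
    score of every candidate by MARGIN (the correct column only adds a constant)."""
    scores = mapped_feats @ all_sem.t()                       # (batch, num_candidates)
    true_scores = (mapped_feats * true_sem).sum(1, keepdim=True)
    return F.relu(MARGIN - true_scores + scores).mean()

# Toy usage with random tensors standing in for CNN features and class attributes.
generator = ConditionalGenerator()
visual_to_semantic = nn.Linear(VIS_DIM, SEM_DIM)              # the mapping being learned

class_embeds = torch.randn(40, SEM_DIM)                       # 40 seen classes (assumed)
aug_sem = interpolate_semantics(class_embeds, num_pairs=16)   # interpolated semantics
synth_feats = generator(aug_sem, torch.randn(16, NOISE_DIM))  # synthesized samples

mapped = visual_to_semantic(synth_feats)
loss = hinge_ranking_loss(mapped, aug_sem, torch.cat([class_embeds, aug_sem], dim=0))
loss.backward()

In practice the random tensors would be replaced by real CNN features and class attribute vectors, and the generator would be trained adversarially against a discriminator before its synthetic features are used to fit the mapping.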
Pages: 2476-2483
Number of pages: 8
Related papers
50 records in total
  • [1] Learning adversarial semantic embeddings for zero-shot recognition in open worlds
    Li, Tianqi
    Pang, Guansong
    Bai, Xiao
    Zheng, Jin
    Zhou, Lei
    Ning, Xin
    [J]. PATTERN RECOGNITION, 2024, 149
  • [2] Semantic Autoencoder for Zero-Shot Learning
    Kodirov, Elyor
    Xiang, Tao
    Gong, Shaogang
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 4447 - 4456
  • [3] Learning semantic ambiguities for zero-shot learning
    Hanouti, Celina
    Le Borgne, Herve
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (26) : 40745 - 40759
  • [4] Semantic Augmentation Hashing for Zero-Shot Image Retrieval
    Zhong, Fangming
    Chen, Zhikui
    Min, Geyong
    Xia, Feng
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 1943 - 1947
  • [5] Adversarial strategy for transductive zero-shot learning
    Liu, Youfa
    Du, Bo
    Ni, Fuchuan
    [J]. INFORMATION SCIENCES, 2021, 578 : 750 - 761
  • [6] Zero-Shot Learning by Harnessing Adversarial Samples
    Chen, Zhi
    Zhang, Pengfei
    Li, Jingjing
    Wang, Sen
    Huang, Zi
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 4138 - 4146
  • [7] Preserving Semantic Relations for Zero-Shot Learning
    Annadani, Yashas
    Biswas, Soma
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 7603 - 7612
  • [8] SR-GAN: Semantic Rectifying Generative Adversarial Network for Zero-Shot Learning
    Ye, Zihan
    Lyu, Fan
    Li, Linyan
    Fu, Qiming
    Ren, Jinchang
    Hu, Fuyuan
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 85 - 90
  • [9] Semantic softmax loss for zero-shot learning
    Ji, Zhong
    Sun, Yuxin
    Yu, Yunlong
    Guo, Jichang
    Pang, Yanwei
    [J]. NEUROCOMPUTING, 2018, 316 : 369 - 375