From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis

Cited by: 80
Authors
Long, Yang [1 ]
Liu, Li [2 ]
Shao, Ling [2 ]
Shen, Fumin [3 ]
Ding, Guiguang [4 ]
Han, Jungong [5 ]
Affiliations
[1] Univ Sheffield, Dept Elect & Elect Engn, Sheffield, S Yorkshire, England
[2] Univ East Anglia, Sch Comp Sci, Norwich, Norfolk, England
[3] Univ Elect Sci & Technol China, Ctr Future Media, Chengdu, Sichuan, Peoples R China
[4] Tsinghua Univ, Sch Software, Beijing, Peoples R China
[5] Northumbria Univ, Dept Comp Sci & Digital Technol, Newcastle Upon Tyne, Tyne & Wear, England
Keywords
DOI
10.1109/CVPR.2017.653
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Robust object recognition systems usually rely on powerful feature extraction mechanisms applied to a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Thereafter, ZSL recognition is converted into a conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVM. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data. Extensive experimental results suggest that our proposed approach significantly improves on state-of-the-art results.
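The pipeline described in the abstract can be illustrated with a toy sketch. Everything below is hypothetical (random attribute vectors, ridge regression standing in for the paper's UVDS objective, and a nearest-prototype rule standing in for the SVM); it only shows the overall idea of turning ZSL into supervised classification via synthesised features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the paper's data): 6 seen and 2 unseen classes,
# each described by a 5-dimensional semantic attribute vector.
A_seen = rng.random((6, 5))
A_unseen = rng.random((2, 5))

# Ground-truth attribute->feature map, used only to simulate "real" images;
# the method itself never sees it.
W_true = rng.standard_normal((5, 8))

# 20 real images per seen class: class prototype plus noise.
X_seen = np.repeat(A_seen @ W_true, 20, axis=0) + 0.1 * rng.standard_normal((120, 8))
A_rep = np.repeat(A_seen, 20, axis=0)

# Step 1: learn an attribute->feature embedding from seen data only.
# Ridge regression here stands in for the paper's UVDS objective.
W = np.linalg.solve(A_rep.T @ A_rep + 1e-3 * np.eye(5), A_rep.T @ X_seen)

# Step 2: synthesise visual features for unseen classes from their
# attributes alone -- no real unseen images are required.
X_synth = A_unseen @ W  # one synthesised prototype per unseen class

# Step 3: ZSL is now conventional supervised classification; a
# nearest-prototype rule stands in for the SVM used in the paper.
X_test = np.repeat(A_unseen @ W_true, 10, axis=0) + 0.1 * rng.standard_normal((20, 8))
y_test = np.repeat(np.arange(2), 10)
pred = np.argmin(((X_test[:, None, :] - X_synth[None]) ** 2).sum(-1), axis=1)
acc = (pred == y_test).mean()
print(f"unseen-class accuracy: {acc:.2f}")
```

Because the attribute-to-feature map is learned entirely on seen classes, test images from unseen classes can still be matched against their synthesised prototypes, which is the conversion the abstract describes.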
Pages: 6165-6174
Page count: 10
Related Papers
50 items in total
  • [1] Learning unseen visual prototypes for zero-shot classification
    Li, Xiao
    Fang, Min
    Feng, Dazheng
    Li, Haikun
    Wu, Jinqiao
    [J]. KNOWLEDGE-BASED SYSTEMS, 2018, 160 : 176 - 187
  • [2] Adversarial unseen visual feature synthesis for Zero-shot Learning
    Zhang, Haofeng
    Long, Yang
    Liu, Li
    Shao, Ling
    [J]. NEUROCOMPUTING, 2019, 329 : 12 - 20
  • [3] Zero-shot classification with unseen prototype learning
    Zhong Ji
    Biying Cui
    Yunlong Yu
    Yanwei Pang
    Zhongfei Zhang
    [J]. Neural Computing and Applications, 2023, 35 : 12307 - 12317
  • [4] Zero-shot classification with unseen prototype learning
    Ji, Zhong
    Cui, Biying
    Yu, Yunlong
    Pang, Yanwei
    Zhang, Zhongfei
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023, 35 (17): 12307 - 12317
  • [5] Zero-Shot Learning Using Synthesised Unseen Visual Data with Diffusion Regularisation
    Long, Yang
    Liu, Li
    Shen, Fumin
    Shao, Ling
    Li, Xuelong
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (10) : 2498 - 2512
  • [6] Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning
    Changpinyo, Soravit
    Chao, Wei-Lun
    Sha, Fei
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 3496 - 3505
  • [7] Heterogeneous Data Integration using Confidence Estimation of Unseen Visual Data for Zero-shot Learning
    Seo, Sanghyun
    Kim, Juntae
    [J]. PROCEEDINGS OF THE 2019 2ND INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING AND INFORMATION MANAGEMENT (ICSIM 2019) / 2019 2ND INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (ICBDSC 2019), 2019, : 171 - 174
  • [8] Visual Data Synthesis via GAN for Zero-Shot Video Classification
    Zhang, Chenrui
    Peng, Yuxin
    [J]. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 1128 - 1134
  • [9] Learning domain invariant unseen features for generalized zero-shot classification
    Li, Xiao
    Fang, Min
    Li, Haikun
    Wu, Jinqiao
    [J]. KNOWLEDGE-BASED SYSTEMS, 2020, 206
  • [10] Distinguishing Unseen from Seen for Generalized Zero-shot Learning
    Su, Hongzu
    Li, Jingjing
    Chen, Zhi
    Zhu, Lei
    Lu, Ke
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 7875 - 7884