Zero-shot unsupervised image-to-image translation via exploiting semantic attributes

Cited: 2
Authors
Chen, Yuanqi [1,2]
Yu, Xiaoming [1,2]
Liu, Shan [3]
Gao, Wei [1,2]
Li, Ge [1]
Affiliations
[1] Peking Univ, Sch Elect & Comp Engn, Shenzhen Grad Sch, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518055, Peoples R China
[3] Tencent Inc, Shenzhen 518000, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Image-to-image translation; Image synthesis; Zero-shot learning; Generative adversarial networks; GENERATIVE ADVERSARIAL NETWORKS; GAN; CLASSIFICATION;
DOI
10.1016/j.imavis.2022.104489
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent studies have shown remarkable success in unsupervised image-to-image translation. However, when too few images of the target classes are available, learning a mapping from source classes to target classes tends to suffer from mode collapse, especially in the zero-shot case, which limits the applicability of existing methods. In this work, we propose a zero-shot unsupervised image-to-image translation framework that addresses this limitation by effectively associating categories with their side information, such as attributes. To generalize the translator to previously unseen classes, we introduce two strategies for exploiting the semantic attribute space. First, we propose to preserve semantic relations in the visual space, providing effective guidance on where to map the input image. Second, we expand the attribute space by utilizing attribute vectors of unseen classes, which alleviates the mapping bias for unseen classes. Both strategies encourage the translator to explore the modes of unseen classes. Quantitative and qualitative results on different datasets validate the effectiveness of the proposed approach. Moreover, we demonstrate that our framework can be applied to the fashion design task. (c) 2022 Elsevier B.V. All rights reserved.
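The following is a minimal sketch, not the authors' released code, of how the two ideas in the abstract could be wired together: an attribute-conditioned translator, a loss that keeps pairwise similarities in visual space consistent with those in attribute space ("preserving semantic relations"), and a second forward pass driven by sampled unseen-class attribute vectors ("expanding the attribute space"). It assumes PyTorch; all module names, shapes, and hyperparameters are hypothetical.

```python
# Illustrative sketch only; architecture and losses are assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttributeConditionedTranslator(nn.Module):
    """Toy encoder-decoder that translates an image conditioned on a class-attribute vector."""

    def __init__(self, attr_dim: int = 85, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_dim, 4, 2, 1), nn.ReLU(inplace=True),
        )
        self.attr_proj = nn.Linear(attr_dim, feat_dim)   # embed semantic attributes
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, attr: torch.Tensor) -> torch.Tensor:
        h = self.encoder(image)                          # B x feat_dim x H/4 x W/4
        a = self.attr_proj(attr)[:, :, None, None]       # broadcast attribute code over space
        return self.decoder(h + a)


def semantic_relation_loss(visual_feats: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
    """Match pairwise similarities in visual space to those in attribute space,
    one plausible way to preserve semantic relations across the two spaces."""
    v = F.normalize(visual_feats.flatten(1), dim=1)
    s = F.normalize(attrs, dim=1)
    return F.mse_loss(v @ v.t(), s @ s.t())


if __name__ == "__main__":
    translator = AttributeConditionedTranslator()
    x = torch.randn(4, 3, 64, 64)                        # dummy batch of source images
    seen_attr = torch.rand(4, 85)                        # attributes of seen target classes
    unseen_attr = torch.rand(4, 85)                      # sampled unseen-class attributes
    fake_seen = translator(x, seen_attr)                 # ordinary translation path
    fake_unseen = translator(x, unseen_attr)             # "expanded" attribute-space path
    loss = semantic_relation_loss(translator.encoder(fake_seen), seen_attr)
    print(fake_seen.shape, fake_unseen.shape, loss.item())
```

In a full training loop these terms would be combined with adversarial and reconstruction objectives; the sketch only demonstrates how attribute vectors of unseen classes can drive extra translation paths so the generator is not biased toward seen modes.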
Pages: 10
Related Papers
50 records in total
  • [1] Zero-shot unsupervised image-to-image translation via exploiting semantic attributes
    Chen, Yuanqi
    Yu, Xiaoming
    Liu, Shan
    Gao, Wei
    Li, Ge
    Image and Vision Computing, 2022, 124
  • [2] Zero-shot Image-to-Image Translation
    Parmar, Gaurav
    Singh, Krishna Kumar
    Zhang, Richard
    Li, Yijun
    Lu, Jingwan
    Zhu, Jun-Yan
    PROCEEDINGS OF SIGGRAPH 2023 CONFERENCE PAPERS, SIGGRAPH 2023, 2023,
  • [3] ZstGAN: An adversarial approach for Unsupervised Zero-Shot Image-to-image Translation
    Lin, Jianxin
    Xia, Yingce
    Liu, Sen
    Zhao, Shuxin
    Chen, Zhibo
    NEUROCOMPUTING, 2021, 461 : 327 - 335
  • [4] Few-Shot Unsupervised Image-to-Image Translation
    Liu, Ming-Yu
    Huang, Xun
    Mallya, Arun
    Karras, Tero
    Aila, Timo
    Lehtinen, Jaakko
    Kautz, Jan
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 10550 - 10559
  • [5] Jurassic World Remake: Bringing Ancient Fossils Back to Life via Zero-Shot Long Image-to-Image Translation
    Martin, Alexander
    Zheng, Haitian
    An, Jie
    Luo, Jiebo
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 9320 - 9328
  • [6] Unsupervised Domain Adaptation for the Semantic Segmentation of Remote Sensing Images via One-Shot Image-to-Image Translation
    Ismael, Sarmad F.
    Kayabol, Koray
    Aptoula, Erchan
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2023, 20
  • [7] Zero-shot image classification via Visual–Semantic Feature Decoupling
    Sun, Xin
    Tian, Yu
    Li, Haojie
    Multimedia Systems, 2024, 30
  • [8] Multimodal Unsupervised Image-to-Image Translation
    Huang, Xun
    Liu, Ming-Yu
    Belongie, Serge
    Kautz, Jan
    COMPUTER VISION - ECCV 2018, PT III, 2018, 11207 : 179 - 196
  • [9] Unsupervised Image-to-Image Translation: A Review
    Hoyez, Henri
    Schockaert, Cedric
    Rambach, Jason
    Mirbach, Bruno
    Stricker, Didier
    SENSORS, 2022, 22 (21)
  • [10] Unsupervised Image-to-Image Translation Networks
    Liu, Ming-Yu
    Breuel, Thomas
    Kautz, Jan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30