Marginal Contrastive Correspondence for Guided Image Generation

Cited: 17
Authors
Zhan, Fangneng [1 ]
Yu, Yingchen [2 ]
Wu, Rongliang [2 ]
Zhang, Jiahui [2 ]
Lu, Shijian [2 ]
Zhang, Changgong [3 ]
Affiliations
[1] Nanyang Technol Univ, S Lab, Singapore, Singapore
[2] Nanyang Technol Univ, Singapore, Singapore
[3] Amazon, Seattle, WA USA
DOI
10.1109/CVPR52688.2022.01040
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Exemplar-based image translation establishes dense correspondences between a conditional input and an exemplar (from two different domains) for leveraging detailed exemplar styles to achieve realistic image translation. Existing work builds the cross-domain correspondences implicitly by minimizing feature-wise distances across the two domains. Without explicit exploitation of domain-invariant features, this approach may not reduce the domain gap effectively, which often leads to sub-optimal correspondences and image translation. We design a Marginal Contrastive Learning Network (MCL-Net) that explores contrastive learning to learn domain-invariant features for realistic exemplar-based image translation. Specifically, we design an innovative marginal contrastive loss that explicitly guides the establishment of dense correspondences. Nevertheless, building correspondences with domain-invariant semantics alone may impair the texture patterns and lead to degraded texture generation. We thus design a Self-Correlation Map (SCM) that incorporates scene structures as auxiliary information, which improves the built correspondences substantially. Quantitative and qualitative experiments on multifarious image translation tasks show that the proposed method consistently outperforms the state of the art.
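To make the core idea of the marginal contrastive loss more concrete, the sketch below shows a generic margin-based InfoNCE loss between two sets of cross-domain features, where matched indices are treated as positive pairs and a margin is subtracted from the positive similarity to enforce a stricter decision boundary. This is an illustrative simplification, not the exact MCL-Net formulation; the function name, margin value, and temperature are assumptions for demonstration.

```python
import numpy as np

def marginal_contrastive_loss(feat_a, feat_b, margin=0.5, tau=0.07):
    """Margin-based InfoNCE between two feature sets (one per domain).

    feat_a, feat_b : (n, d) arrays of paired features; row i of feat_a
    matches row i of feat_b (positive pair), all other rows are negatives.
    """
    # L2-normalize so dot products become cosine similarities
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    sim = a @ b.T                                  # (n, n) similarity matrix
    n = sim.shape[0]
    # Subtract the margin from positive (diagonal) similarities, which
    # forces positives to beat negatives by at least `margin`
    sim[np.arange(n), np.arange(n)] -= margin
    logits = sim / tau
    # Numerically stable log-softmax over each row, then cross-entropy
    # against the diagonal (matching) entries
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(n), np.arange(n)].mean())
```

With a positive margin, even perfectly aligned features incur a higher loss than with no margin, which is what pushes the learned features toward a wider separation between positive and negative pairs across domains.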
Pages: 10653-10662
Page count: 10
Related Papers
50 items total
  • [41] IgSEG: Image-guided Story Ending Generation
    Huang, Qingbao
    Huang, Chuan
    Mo, Linzhang
    Wei, Jielong
    Cai, Yi
    Leung, Ho-fung
    Li, Qing
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 3114 - 3123
  • [42] Foreground Feature-Guided Camouflage Image Generation
    Chen, Yuelin
    An, Yuefan
    Huang, Yonsen
    Cai, Xiaodong
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2025, 16 (01) : 405 - 411
  • [43] Segmentation mask-guided person image generation
    Liu, Meichen
    Yan, Xin
    Wang, Chenhui
    Wang, Kejun
    APPLIED INTELLIGENCE, 2021, 51 (02) : 1161 - 1176
  • [44] Perceptual metric-guided human image generation
    Wu, Haoran
    He, Fazhi
    Duan, Yansong
    Yan, Xiaohu
    INTEGRATED COMPUTER-AIDED ENGINEERING, 2022, 29 (02) : 141 - 151
  • [45] Filter-Guided Diffusion for Controllable Image Generation
    Gu, Zeqi
    Yang, Ethan
    Davis, Abe
    PROCEEDINGS OF SIGGRAPH 2024 CONFERENCE PAPERS, 2024,
  • [46] Correction to: Learning Contrastive Representation for Semantic Correspondence
    Xiao, Taihong
    Liu, Sifei
    De Mello, Shalini
    Yu, Zhiding
    Kautz, Jan
    Yang, Ming-Hsuan
    International Journal of Computer Vision, 2022, 130 : 1607 - 1607
  • [47] Zero-Shot Contrastive Loss for Text-Guided Diffusion Image Style Transfer
    Yang, Serin
    Hwang, Hyunmin
    Ye, Jong Chul
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 22816 - 22825
  • [48] Pseudo-label Guided Contrastive Learning for Semi-supervised Medical Image Segmentation
    Basak, Hritam
    Yin, Zhaozheng
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 19786 - 19797
  • [49] Efficient Token-Guided Image-Text Retrieval With Consistent Multimodal Contrastive Training
    Liu, Chong
    Zhang, Yuqi
    Wang, Hongsong
    Chen, Weihua
    Wang, Fan
    Huang, Yan
    Shen, Yi-Dong
    Wang, Liang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 3622 - 3633
  • [50] Physical-prior-guided single image dehazing network via unpaired contrastive learning
    Wu, Mawei
    Jiang, Aiwen
    Chen, Hourong
    Ye, Jihua
    MULTIMEDIA SYSTEMS, 2024, 30 (05)