Marginal Contrastive Correspondence for Guided Image Generation

Cited by: 17
Authors
Zhan, Fangneng [1]
Yu, Yingchen [2]
Wu, Rongliang [2]
Zhang, Jiahui [2]
Lu, Shijian [2]
Zhang, Changgong [3]
Affiliations
[1] Nanyang Technol Univ, S Lab, Singapore, Singapore
[2] Nanyang Technol Univ, Singapore, Singapore
[3] Amazon, Seattle, WA USA
DOI
10.1109/CVPR52688.2022.01040
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Exemplar-based image translation establishes dense correspondences between a conditional input and an exemplar (from two different domains) so that detailed exemplar styles can be leveraged for realistic image translation. Existing work builds the cross-domain correspondences implicitly by minimizing feature-wise distances across the two domains. Without explicit exploitation of domain-invariant features, this approach may not reduce the domain gap effectively, which often leads to sub-optimal correspondences and image translation. We design a Marginal Contrastive Learning Network (MCL-Net) that exploits contrastive learning to learn domain-invariant features for realistic exemplar-based image translation. Specifically, we design an innovative marginal contrastive loss that explicitly guides the establishment of dense correspondences. Nevertheless, building correspondences with domain-invariant semantics alone may impair texture patterns and lead to degraded texture generation. We thus design a Self-Correlation Map (SCM) that incorporates scene structures as auxiliary information, which improves the built correspondences substantially. Quantitative and qualitative experiments on a variety of image translation tasks show that the proposed method consistently outperforms the state-of-the-art.
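As a rough illustration of the two components the abstract describes, the sketch below implements a margin-augmented, InfoNCE-style contrastive loss over cross-domain feature pairs and a simple self-correlation map over spatial features. This is a minimal PyTorch sketch inferred from the abstract alone; the function names, the additive-margin formulation, and the hyperparameters (margin, tau) are illustrative assumptions, not the authors' released MCL-Net implementation.

```python
# Minimal sketch (assumptions, not the authors' code): a margin-augmented
# InfoNCE-style loss for cross-domain correspondence, plus a simple
# self-correlation map that encodes scene structure.
import torch
import torch.nn.functional as F

def marginal_contrastive_loss(feat_a, feat_b, margin=0.25, tau=0.07):
    """feat_a, feat_b: (N, C) features from the conditional input and the
    exemplar; row i of feat_a is assumed to correspond to row i of feat_b."""
    a = F.normalize(feat_a, dim=1)  # unit vectors, so dot products are cosines
    b = F.normalize(feat_b, dim=1)
    sim = a @ b.t()                                # (N, N) cross-domain similarities
    eye = torch.eye(len(a), device=a.device)
    # Additive margin on the positive (diagonal) pairs: each positive must
    # beat every negative by at least `margin` before the loss reaches zero,
    # pushing both domains toward a shared, domain-invariant feature space.
    logits = (sim - margin * eye) / tau
    targets = torch.arange(len(a), device=a.device)
    return F.cross_entropy(logits, targets)

def self_correlation_map(feat):
    """feat: (N, C, H, W). Correlates every spatial position with all other
    positions of the same map, yielding an (N, HW, HW) scene-structure cue."""
    f = F.normalize(feat.flatten(2), dim=1)        # (N, C, HW)
    return torch.bmm(f.transpose(1, 2), f)         # cosine self-correlation

# Smoke test with random features.
a, b = torch.randn(8, 64), torch.randn(8, 64)
print(marginal_contrastive_loss(a, b))                       # scalar loss
print(self_correlation_map(torch.randn(2, 64, 4, 4)).shape)  # (2, 16, 16)
```

The margin is plausibly what makes the loss "marginal": plain InfoNCE only needs the positive pair to score highest, whereas an additive margin forces a fixed similarity gap over every negative, yielding sharper correspondences. The self-correlation map depends only on relative feature similarities within a single image, so it carries structure information that survives the domain gap.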
Pages: 10653-10662
Number of pages: 10
Related Papers
(50 items in total)
  • [31] Dynamic contrastive learning guided by class confidence and confusion degree for medical image segmentation
    Chen, Jingkun
    Chen, Changrui
    Huang, Wenjian
    Zhang, Jianguo
    Debattista, Kurt
    Han, Jungong
    PATTERN RECOGNITION, 2024, 145
  • [32] Enhancing Hyperspectral Image Classification: Leveraging Unsupervised Information With Guided Group Contrastive Learning
    Li, Ben
    Fang, Leyuan
    Chen, Ning
    Kang, Jitong
    Yue, Jun
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 17
  • [33] Edge Guided GANs With Multi-Scale Contrastive Learning for Semantic Image Synthesis
    Tang, Hao
    Sun, Guolei
    Sebe, Nicu
    Van Gool, Luc
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (12) : 14435 - 14452
  • [34] Entropy-guided contrastive learning for semi-supervised medical image segmentation
    Xie, Junsong
    Wu, Qian
    Zhu, Renju
    IET IMAGE PROCESSING, 2024, 18 (02) : 312 - 326
  • [35] RetCCL: Clustering-guided contrastive learning for whole-slide image retrieval
    Wang, Xiyue
    Du, Yuexi
    Yang, Sen
    Zhang, Jun
    Wang, Minghui
    Zhang, Jing
    Yang, Wei
    Huang, Junzhou
    Han, Xiao
    MEDICAL IMAGE ANALYSIS, 2023, 83
  • [36] Contrastive rhetoric: Issues in technical and professional correspondence
    Eliot, M
    Kasonic, K
    IPCC 2001: IEEE INTERNATIONAL PROFESSIONAL COMMUNICATION CONFERENCE, PROCEEDINGS: COMMUNICATION DIMENSIONS, 2001, : 265 - 273
  • [37] Adaptive Multitype Contrastive Views Generation for Remote Sensing Image Semantic Segmentation
    Shi, Cheng
    Han, Peiwen
    Zhao, Minghua
    Fang, Li
    Miao, Qiguang
    Pun, Chi-Man
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63
  • [38] Loss functions for pose guided person image generation
    Shi, Haoyue
    Wang, Le
    Zheng, Nanning
    Hua, Gang
    Tang, Wei
    PATTERN RECOGNITION, 2022, 122
  • [39] Person Image Generation Guided by Posture, Expression and Illumination
    Wang, Kai
    Lu, Hongming
    Guo, Jing
    Ke, Yongzhen
    Yang, Shuai
    Tian, Kai
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE AND APPLICATIONS, 2024, 23 (03)
  • [40] Segmentation mask-guided person image generation
    Liu, Meichen
    Yan, Xin
    Wang, Chenhui
    Wang, Kejun
    APPLIED INTELLIGENCE, 2021, 51 : 1161 - 1176