Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation

Cited by: 5
Authors
Lin, Yupei [1 ]
Zhang, Sen [2 ]
Chen, Tianshui [1 ]
Lu, Yongyi [1 ]
Li, Guangping [1 ]
Shi, Yukai [1 ]
Affiliations
[1] Guangdong Univ Technol, Guangzhou, Peoples R China
[2] Univ Sydney, Sydney, NSW, Australia
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
contrastive learning; image-to-image translation; generative adversarial network;
DOI
10.1145/3503161.3547802
CLC Number
TP39 [Applications of Computers];
Discipline Classification Code
081203; 0835;
Abstract
Unpaired image-to-image translation aims to find a mapping between a source domain and a target domain. To alleviate the lack of supervised labels for the source images, cycle-consistency based methods have been proposed to preserve image structure by assuming a reversible relationship between unpaired images. However, this assumption exploits only limited correspondence between image pairs. Recently, contrastive learning (CL) has been used to further exploit image correspondence in unpaired image translation through patch-based positive/negative learning. Patch-based contrastive routines obtain the positives by self-similarity computation and treat the remaining patches as negatives. This flexible learning paradigm provides auxiliary contextualized information at a low cost. Since the negatives come in impressively large numbers, we investigate a natural question: are all negatives necessary for feature contrastive learning? Unlike previous CL approaches that use as many negatives as possible, in this paper we study the negatives from an information-theoretic perspective and introduce a new negative Pruning technology for Unpaired image-to-image Translation (PUT) that sparsifies and ranks the patches. The proposed algorithm is efficient and flexible, and it enables the model to stably learn the essential information between corresponding patches. By putting quality over quantity, only a few negative patches are required to achieve better results. Finally, we validate the superiority, stability, and versatility of our model through comparative experiments.
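
The abstract describes the method only at a high level. The PyTorch snippet below is a minimal sketch of the general idea: a PatchNCE-style patch contrastive loss in which the negatives are ranked and pruned so that only a few enter the softmax. The function name pruned_patch_nce_loss, the tensor layout, and the choice of ranking negatives by their similarity to the query are illustrative assumptions; the paper's actual information-theoretic sparsification and ranking procedure is not reproduced here.

import torch
import torch.nn.functional as F

def pruned_patch_nce_loss(query, positive, negatives, num_kept=16, temperature=0.07):
    """Patch-wise contrastive loss that keeps only a few ranked negatives.

    query:     (N, C)    features of patches from the translated image
    positive:  (N, C)    features of the corresponding patches in the source image
    negatives: (N, M, C) features of the other (non-corresponding) patches
    num_kept:  number of negatives retained per query after ranking
    """
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Positive logit: similarity between each patch and its counterpart.
    l_pos = (query * positive).sum(dim=-1, keepdim=True)           # (N, 1)

    # Negative logits: similarity of each query to every other patch.
    l_neg = torch.bmm(negatives, query.unsqueeze(-1)).squeeze(-1)   # (N, M)

    # Prune: rank negatives by similarity and keep only the top few
    # (hardest) ones -- an illustrative criterion, not necessarily PUT's.
    l_neg, _ = l_neg.topk(min(num_kept, l_neg.size(1)), dim=-1)     # (N, k)

    # Standard InfoNCE cross-entropy with the positive at index 0.
    logits = torch.cat([l_pos, l_neg], dim=-1) / temperature        # (N, 1 + k)
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Toy shapes: 64 patch queries, 255 candidate negatives, 256-d features.
    q = torch.randn(64, 256)
    pos = torch.randn(64, 256)
    neg = torch.randn(64, 255, 256)
    print(pruned_patch_nce_loss(q, pos, neg).item())

In a CUT-style setup, query would hold features of patches from the translated image while positive and negatives hold features of the corresponding and non-corresponding patches of the input image; shrinking num_kept reflects the quality-over-quantity argument above.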
Pages: 1186 - 1194
Page count: 9