Asymmetric slack contrastive learning for full use of feature information in image translation

Cited by: 0
Authors
Zhang, Yusen [1 ]
Li, Min [1 ]
Gou, Yao [1 ]
He, Yujie [1 ]
Institutions
[1] Xi'an Research Institute of High-Tech, Xi'an 710025, Shaanxi, People's Republic of China
Keywords
Image translation; Cross-domain learning; Asymmetric slack contrast; Contrastive learning; Structure consistency
DOI
10.1016/j.knosys.2024.112136
CLC Classification Number
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, contrastive learning has proven powerful for cross-domain feature learning and has been widely used in image translation tasks. However, existing methods often overlook the differences between positive and negative samples in their ability to drive model optimization and treat them equally, which weakens the feature representation ability of generative models. In this paper, we propose a novel image translation model based on asymmetric slack contrastive learning. We design a new asymmetric contrastive loss by introducing a slack adjustment factor. Theoretical analysis shows that it adaptively adjusts the optimization for different positive and negative samples and significantly improves optimization efficiency. In addition, to better preserve local structural relationships during image translation, we construct a regional differential structural consistency correction block based on differential vectors. Comparative experiments against seven existing methods on five datasets show that our method maintains structural consistency between cross-domain images at a deeper level and is more effective at establishing real image-domain mapping relations, yielding higher-quality generated images.
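The record gives only the abstract, not the loss itself. As a hedged illustration of the idea, the PyTorch sketch below shows one plausible PatchNCE-style instantiation, in which a hypothetical `slack` factor rescales the negative logits so that positive and negative samples are no longer weighted symmetrically. The function name, the placement of the factor, and all default values are assumptions, not the paper's formulation.

```python
# Hypothetical sketch (not the paper's code): a PatchNCE-style contrastive
# loss with an assumed slack factor that asymmetrically down-weights the
# negative logits, so negatives are repelled more gently than the positive
# pair is attracted.
import torch
import torch.nn.functional as F

def asymmetric_slack_nce(query, positive, negatives, slack=0.5, tau=0.07):
    """query, positive: (B, D) patch features; negatives: (B, N, D).

    `slack` is an illustrative adjustment factor (assumed); `tau` is the
    usual InfoNCE temperature.
    """
    q = F.normalize(query, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos_logit = (q * p).sum(dim=-1, keepdim=True) / tau           # (B, 1)
    neg_logits = torch.bmm(n, q.unsqueeze(-1)).squeeze(-1) / tau  # (B, N)
    # Asymmetry: only the negative logits are rescaled by the slack
    # factor, so positives and negatives are no longer treated equally.
    logits = torch.cat([pos_logit, slack * neg_logits], dim=1)
    # The positive pair sits at index 0 of each row of logits.
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

Setting `slack = 1.0` in this sketch recovers the standard symmetric InfoNCE loss, which makes the asymmetric variant straightforward to ablate against the usual formulation.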
Pages: 14
Related Papers
50 records in total
  • [1] Multi-feature contrastive learning for unpaired image-to-image translation
    Gou, Yao
    Li, Min
    Song, Yu
    He, Yujie
    Wang, Litao
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (04): 4111-4122
  • [2] Contrastive learning for unsupervised image-to-image translation
    Lee, Hanbit
    Seol, Jinseok
    Lee, Sang-goo
    Park, Jaehui
    Shim, Junho
    APPLIED SOFT COMPUTING, 2024, 151
  • [3] Contrastive Translation Learning for Medical Image Segmentation
    Zeng, Wankang
    Fan, Wenkang
    Shen, Dongfang
    Chen, Yinran
    Luo, Xiongbiao
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 2395-2399
  • [4] Dual Contrastive Learning for Unsupervised Image-to-Image Translation
    Han, Junlin
    Shoeiby, Mehrdad
    Petersson, Lars
    Armin, Mohammad Ali
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021: 746-755
  • [5] Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation
    Lin, Yupei
    Zhang, Sen
    Chen, Tianshui
    Lu, Yongyi
    Li, Guangping
    Shi, Yukai
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022: 1186-1194
  • [6] Feature-level contrastive learning for full-reference light field image quality assessment
    Lin, Lili
    Qu, Mengjia
    Bai, Siyu
    Wang, Luyao
    Wei, Xuehui
    Zhou, Wenhui
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2024, 361 (14)
  • [7] Truly Unsupervised Image-to-Image Translation with Contrastive Representation Learning
    Hong, Zhiwei
    Feng, Jianxing
    Jiang, Tao
    COMPUTER VISION - ACCV 2022, PT III, 2023, 13843: 239-255
  • [8] Patch-Wise Graph Contrastive Learning for Image Translation
    Jung, Chanyong
    Kwon, Gihyun
    Ye, Jong Chul
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024: 13013-13021
  • [9] Background-focused contrastive learning for unpaired image-to-image translation
    Shao, Mingwen
    Han, Minggui
    Meng, Lingzhuang
    Liu, Fukang
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (04)