Iterative Prompt Learning for Unsupervised Backlit Image Enhancement

Cited by: 12
Authors
Liang, Zhexin [1 ]
Li, Chongyi [1 ]
Zhou, Shangchen [1 ]
Feng, Ruicheng [1 ]
Loy, Chen Change [1 ]
Affiliations
[1] Nanyang Technol Univ, S Lab, Singapore, Singapore
DOI
10.1109/ICCV51070.2023.00743
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose a novel unsupervised backlit image enhancement method, abbreviated as CLIP-LIT, by exploring the potential of Contrastive Language-Image Pre-Training (CLIP) for pixel-level image enhancement. We show that the open-world CLIP prior not only aids in distinguishing between backlit and well-lit images, but also in perceiving heterogeneous regions with different luminance, facilitating the optimization of the enhancement network. Unlike high-level and image manipulation tasks, directly applying CLIP to enhancement tasks is non-trivial, owing to the difficulty in finding accurate prompts. To solve this issue, we devise a prompt learning framework that first learns an initial prompt pair by constraining the text-image similarity between the prompt (negative/positive sample) and the corresponding image (backlit image/well-lit image) in the CLIP latent space. Then, we train the enhancement network based on the text-image similarity between the enhanced result and the initial prompt pair. To further improve the accuracy of the initial prompt pair, we iteratively fine-tune the prompt learning framework to reduce the distribution gaps between the backlit images, enhanced results, and well-lit images via rank learning, boosting the enhancement performance. Our method alternates between updating the prompt learning framework and enhancement network until visually pleasing results are achieved. Extensive experiments demonstrate that our method outperforms state-of-the-art methods in terms of visual quality and generalization ability, without requiring any paired data.
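
To make the two alternating objectives described in the abstract concrete, below is a minimal PyTorch sketch; it is not the authors' implementation. It assumes pre-extracted CLIP image features and models the prompt pair as two directly learnable vectors in CLIP's joint embedding space, whereas the paper learns prompt token embeddings passed through CLIP's text encoder and refines them with an additional rank-learning loss that is omitted here. All variable names below are illustrative placeholders.

# Minimal sketch of the two alternating losses (assumption: pre-extracted CLIP
# image features; learnable prompt vectors live directly in CLIP's joint space).
import torch
import torch.nn.functional as F

dim = 512  # joint embedding dimension of CLIP ViT-B/32

# Learnable "negative" (backlit) and "positive" (well-lit) prompt embeddings.
prompt_neg = torch.nn.Parameter(torch.randn(dim))
prompt_pos = torch.nn.Parameter(torch.randn(dim))

def prompt_classification_loss(backlit_feat, welllit_feat):
    # Stage 1: fit the prompt pair so it separates backlit from well-lit image
    # features via softmaxed cosine similarity (a binary classification loss).
    prompts = torch.stack([F.normalize(prompt_neg, dim=-1),
                           F.normalize(prompt_pos, dim=-1)])   # (2, dim)
    feats = torch.stack([F.normalize(backlit_feat, dim=-1),
                         F.normalize(welllit_feat, dim=-1)])    # (2, dim)
    logits = feats @ prompts.t()                                # (2, 2)
    target = torch.tensor([0, 1])  # backlit -> negative prompt, well-lit -> positive prompt
    return F.cross_entropy(logits, target)

def enhancement_loss(enhanced_feat):
    # Stage 2: push the CLIP feature of the enhanced result toward the positive
    # ("well-lit") prompt and away from the negative ("backlit") one.
    prompts = torch.stack([F.normalize(prompt_neg, dim=-1),
                           F.normalize(prompt_pos, dim=-1)])
    logits = F.normalize(enhanced_feat, dim=-1) @ prompts.t()   # (2,)
    probs = logits.softmax(dim=-1)
    return -torch.log(probs[1] + 1e-8)

# Toy usage: random vectors stand in for real CLIP image embeddings.
opt_prompt = torch.optim.Adam([prompt_neg, prompt_pos], lr=1e-2)
backlit_feat, welllit_feat = torch.randn(dim), torch.randn(dim)
for _ in range(200):
    opt_prompt.zero_grad()
    prompt_classification_loss(backlit_feat, welllit_feat).backward()
    opt_prompt.step()
# enhancement_loss would then supervise the enhancement network's update, after
# which the prompt pair is refined again; the paper iterates this alternation.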
Pages: 8060 - 8069
Number of pages: 10
Related Papers
50 records in total
  • [1] Single Backlit Image Enhancement
    Trongtirakul, Thaweesak
    Chiracharit, Werapon
    Agaian, Sos S.
    [J]. IEEE ACCESS, 2020, 8 : 71940 - 71950
  • [2] Image retrieval using unsupervised prompt learning and regional attention
    Zhang, Bo-Jian
    Liu, Guang-Hai
    Li, Zuoyong
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 247
  • [3] Unsupervised Image Enhancement via Contrastive Learning
    Li, Di
    Rahardja, Susanto
    [J]. 2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,
  • [4] Backlit image enhancement based on foreground extraction
    Zhao, Minghua
    Cheng, Danni
    Wang, Li
    [J]. TWELFTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2020), 2021, 11720
  • [5] BacklitNet: A dataset and network for backlit image enhancement
    Lv, Xiaoqian
    Zhang, Shengping
    Liu, Qinglin
    Xie, Haozhe
    Zhong, Bineng
    Zhou, Huiyu
    [J]. COMPUTER VISION AND IMAGE UNDERSTANDING, 2022, 218
  • [6] HISTOGRAM SPECIFICATION-BASED IMAGE ENHANCEMENT FOR BACKLIT IMAGE
    Ueda, Yoshiaki
    Moriyama, Daiki
    Koga, Takanori
    Suetake, Noriaki
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 958 - 962
  • [7] Exposure Correction and Local Enhancement for Backlit Image Restoration
    Dhara, Sobhan Kanti
    Sen, Debashis
    [J]. IMAGE AND VIDEO TECHNOLOGY (PSIVT 2019), 2019, 11854 : 170 - 183
  • [8] Colorectal endoscopic image enhancement via unsupervised deep learning
    Yue, Guanghui
    Gao, Jie
    Duan, Lvyin
    Du, Jingfeng
    Yan, Weiqing
    Wang, Shuigen
    Wang, Tianfu
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2023,
  • [9] Unsupervised Learning for 2D Image Texture Enhancement
    Labinghisa, Boney
    Kim, Jeong-Su
    Lee, Dong Myung
    [J]. 12TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE (ICTC 2021): BEYOND THE PANDEMIC ERA WITH ICT CONVERGENCE INNOVATION, 2021, : 587 - 589
  • [10] Unsupervised Deep-Learning Approach for Underwater Image Enhancement
    Espinosa, Alejandro Rico
    McIntosh, Declan
    Albu, Alexandra Branzan
    [J]. ADVANCES IN VISUAL COMPUTING, ISVC 2023, PT II, 2023, 14362 : 233 - 244