Global-Local Generation Adversarial Learning Based Low-Light Image Enhancement

Cited by: 0
Authors
Sun Z. [1 ]
Song H. [1 ]
Fan J. [2 ]
Liu Q. [3 ]
Affiliations
[1] Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, Jiangsu Key Laboratory of Big Data Analysis Technology, Nanjing University of Information Science and Technology, Nanjing
[2] College of Computer Science, Nanjing University of Aeronautics and Astronautics, Nanjing
[3] College of Computer Science, Nanjing University of Information Science and Technology, Nanjing
Keywords
frequency domain loss; generative adversarial network; low-light image enhancement; unsupervised learning
DOI: 10.3724/SP.J.1089.2022.19719
Abstract
In the field of low-light image enhancement, existing unsupervised methods still suffer from a lack of authenticity and from weak enhancement under extremely dark conditions. To address these issues, we propose a highly effective unsupervised approach that directly recovers a normal-light image from a low-light image by designing an effective cyclic generative adversarial network. First, to address the memory shortage caused by large-size inputs, a global-local generator is designed. Second, we develop an adaptive two-stage discriminator: by discriminating the whole image first, it adaptively locates the local regions that need to be discriminated again. Finally, a frequency-domain loss is employed to avoid image distortion; specifically, we adopt the focal frequency loss, which allows the model to adaptively focus on frequency components that are hard to synthesize. The PI scores of our method on the NPE, LIME, MEF, DICM, and VV datasets reach 2.81, 2.78, 2.40, 3.15, and 3.69, respectively. On the LOL dataset, PSNR and SSIM reach 19.89 dB and 0.7823, demonstrating good robustness. © 2022 Institute of Computing Technology. All rights reserved.
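To make the frequency-domain term concrete, below is a minimal NumPy sketch of the focal frequency loss described in the abstract (Jiang et al., 2021), written for a single-channel image pair. The function name, the `alpha` weighting exponent, and the normalization constant are illustrative assumptions, not the authors' exact implementation; the idea is that a per-frequency weight, derived from the spectrum error itself, makes the loss focus on frequency components that are hard to synthesize.

```python
import numpy as np

def focal_frequency_loss(pred, target, alpha=1.0):
    """Sketch of focal frequency loss for one single-channel image pair.

    pred, target: 2-D float arrays of the same shape.
    alpha: exponent controlling how sharply hard frequencies are emphasized.
    """
    # Compare the images in the frequency domain via 2-D FFTs.
    f_pred = np.fft.fft2(pred)
    f_target = np.fft.fft2(target)
    # Squared per-frequency distance between the two spectra.
    dist = np.abs(f_pred - f_target) ** 2
    # Dynamic spectrum weight: frequencies with larger errors get
    # larger weights, so the loss focuses on hard components.
    w = np.abs(f_pred - f_target) ** alpha
    w = w / (w.max() + 1e-8)  # normalize weights into [0, 1]
    return float(np.mean(w * dist))
```

A perfect reconstruction yields a loss of exactly zero, since every spectral difference (and hence every weight) vanishes; any mismatch produces a positive, error-weighted penalty.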
Pages: 1550-1558 (8 pages)