Temporally Consistent Enhancement of Low-Light Videos via Spatial-Temporal Compatible Learning

Cited by: 0
Authors
Zhu, Lingyu [1 ]
Yang, Wenhan [2 ]
Chen, Baoliang [1 ]
Zhu, Hanwei [1 ]
Meng, Xiandong [2 ]
Wang, Shiqi [1 ,3 ]
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Kowloon, Hong Kong, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] City Univ Hong Kong, Shenzhen Res Inst, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Low-light video enhancement; Temporal consistency; Spatial-temporal compatible learning; Quality assessment; Image; Framework; Retinex;
DOI
10.1007/s11263-024-02084-w
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Temporal inconsistency is an annoying artifact commonly introduced by low-light video enhancement, yet current methods tend to overlook the value of combining data-centric clues with model-centric design to tackle this problem. In this context, our work explores the problem comprehensively from three aspects. First, to enrich scene diversity and motion flexibility, we construct a diverse synthetic low/normal-light paired video dataset with a carefully designed low-light simulation strategy, which effectively complements existing real-captured datasets. Second, to better exploit temporal dependencies, we develop a Temporally Consistent Enhancer Network (TCE-Net) that stacks 3D and 2D convolutions to exploit spatial-temporal clues in videos. Last, temporal dynamic feature dependencies are exploited to derive consistency constraints across frame indexes. All these efforts are powered by a Spatial-Temporal Compatible Learning (STCL) optimization technique, which adaptively constructs specific training loss functions for different datasets. As such, multi-frame information can be effectively utilized and different levels of network features can be integrated, expanding the synergies across different kinds of data and offering visually better results in terms of illumination distribution, color consistency, texture detail, and temporal coherence. Extensive experiments on various real-world low-light video datasets demonstrate that the proposed method outperforms state-of-the-art methods. Our code and synthesized low-light video database will be publicly available at https://github.com/lingyzhu0101/low-light-video-enhancement.git.
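To make the abstract's two main ideas concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' released code): an enhancer that stacks 3D convolutions over a short frame window with 2D convolutions for per-frame refinement, plus a simple temporal-consistency penalty between consecutive enhanced frames standing in for the spatial-temporal compatible objective. All layer widths, depths, names, and the loss weighting are illustrative assumptions.

```python
# Hypothetical sketch of a stacked 3D/2D convolutional enhancer and a
# temporal-consistency training loss; not the paper's actual TCE-Net or STCL.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalEnhancerSketch(nn.Module):
    """Toy spatial-temporal enhancer: 3D convs aggregate neighboring frames,
    2D convs refine the pooled features into the enhanced center frame."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # 3D convolutions mix information across the temporal window (B, C, T, H, W).
        self.temporal = nn.Sequential(
            nn.Conv3d(3, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # 2D convolutions map the temporally pooled features to an RGB frame in [0, 1].
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W) low-light window centered on the target frame.
        x = frames.permute(0, 2, 1, 3, 4)        # -> (B, 3, T, H, W)
        feat = self.temporal(x).mean(dim=2)      # pool over T -> (B, C, H, W)
        return self.spatial(feat)                # enhanced center frame (B, 3, H, W)


def training_loss(model, windows, targets, lambda_tc: float = 0.1):
    """Reconstruction loss plus a temporal-consistency penalty that discourages
    flicker: the change between consecutive enhanced frames should match the
    change between the corresponding ground-truth frames."""
    # windows: (B, 2, T, 3, H, W) two overlapping windows for frames t and t+1.
    # targets: (B, 2, 3, H, W) normal-light ground truth for frames t and t+1.
    out_t = model(windows[:, 0])
    out_t1 = model(windows[:, 1])
    recon = F.l1_loss(out_t, targets[:, 0]) + F.l1_loss(out_t1, targets[:, 1])
    tc = F.l1_loss(out_t1 - out_t, targets[:, 1] - targets[:, 0])
    return recon + lambda_tc * tc


if __name__ == "__main__":
    model = TemporalEnhancerSketch()
    windows = torch.rand(1, 2, 5, 3, 64, 64)  # paired 5-frame windows for t and t+1
    targets = torch.rand(1, 2, 3, 64, 64)
    loss = training_loss(model, windows, targets)
    loss.backward()
    print(float(loss))
```

The sketch only illustrates the general pattern (temporal aggregation via 3D convolutions followed by 2D refinement, trained with a reconstruction term plus a consistency term); the paper's actual architecture, dataset-adaptive loss construction, and feature-level consistency constraints are more elaborate.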
Pages: 4703-4723
Page count: 21