RNVE: A Real Nighttime Vision Enhancement Benchmark and Dual-Stream Fusion Network

Cited: 0
|
Authors
Wang, Yuehang [1 ,2 ]
Zhang, Yongji [1 ,2 ]
Guo, Qianren [3 ]
Zhao, Minghao [4 ]
Jiang, Yu [1 ,2 ]
Affiliations
[1] Jilin Univ, Key Lab Symbol Computat & Knowledge Engn, Minist Educ, Changchun 130012, Peoples R China
[2] Jilin Univ, Coll Comp Sci & Technol, Changchun 130012, Peoples R China
[3] Jilin Univ, Coll Software, Changchun 130012, Peoples R China
[4] Jilin Univ, Coll Earth Sci, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Pipelines; Training; Semantics; Cameras; Signal processing algorithms; Visualization; Streaming media; Fusion network; image enhancement; low-light; nighttime vision; QUALITY ASSESSMENT; LIGHT ENHANCEMENT;
DOI
10.1109/LSP.2023.3343972
CLC Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808 ; 0809 ;
Abstract
Images captured under challenging low-light conditions often suffer from myriad issues, including diminished contrast and obscured details, stemming from factors such as constrained lighting conditions or pervasive noise interference. Existing learning-based methods struggle in extreme low-light scenarios due to a lack of diverse paired datasets. In this letter, we meticulously curate a challenging real nighttime vision enhancement dataset called RNVE. RNVE comprises diverse data from various devices, including cameras and smartphones, available in both RGB and RAW formats. To enhance data diversity and enable comprehensive algorithm validation, we integrate synthetically generated low-light data, showcasing a spectrum of low-light effects. Additionally, we propose a low-light vision enhancement pipeline based on a dual-stream fusion network, proficiently improving the reconstruction quality of real nighttime scenes and restoring their authentic colors and contrast. Numerous experiments consistently demonstrate that the proposed pipeline excels in low-light enhancement and exhibits robust generalization capabilities across different datasets.
Pages: 131 - 135
Page count: 5
Related Papers
50 records in total
  • [21] Dual-stream stereo network for depth estimation
    Zhong, Yangyang
    Jia, Tong
    Xi, Kaiqi
    Li, Wenhao
    Chen, Dongyue
    VISUAL COMPUTER, 2023, 39 (11): : 5343 - 5357
  • [23] Efficient Dual-Stream Fusion Network for Real-Time Railway Scene Understanding
    Cao, Zhiwei
    Gao, Yang
    Bai, Jie
    Qin, Yong
    Zheng, Yuanjin
    Jia, Limin
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, : 1 - 11
  • [24] DeReFNet: Dual-stream Dense Residual Fusion Network for static hand gesture recognition
    Sahoo, Jaya Prakash
    Sahoo, Suraj Prakash
    Ari, Samit
    Patra, Sarat Kumar
    DISPLAYS, 2023, 296
  • [25] Dual-Stream Feature Fusion Network for Detection and ReID in Multi-object Tracking
    He, Qingyou
    Li, Liangqun
    PRICAI 2022: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2022, 13629 : 247 - 260
  • [26] Feature Fusion for Dual-Stream Cooperative Action Recognition
    Chen, Dong
    Wu, Mengtao
    Zhang, Tao
    Li, Chuanqi
    IEEE ACCESS, 2023, 11 : 116732 - 116740
  • [27] Antiocclusion Infrared Aerial Target Recognition With Vision-Inspired Dual-Stream Graph Network
    Yang, Xi
    Li, Shaoyi
    Zhang, Liang
    Yan, Binbin
    Meng, Zhongjie
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [28] Dual-Stream Recurrent Neural Network for Video Captioning
    Xu, Ning
    Liu, An-An
    Wong, Yongkang
    Zhang, Yongdong
    Nie, Weizhi
    Su, Yuting
    Kankanhalli, Mohan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (08) : 2482 - 2493
  • [30] Dual-Stream Attention Network for Hyperspectral Image Unmixing
    Wang, Yufang
    Wu, Wenmin
    Qi, Lin
    Gao, Feng
    International Geoscience and Remote Sensing Symposium (IGARSS), 2024, : 9438 - 9441