Enhancing Underwater Video from Consecutive Frames While Preserving Temporal Consistency

Cited: 0
Authors
Hu, Kai [1 ,2 ]
Meng, Yuancheng [1 ]
Liao, Zichen [1 ,3 ]
Tang, Lei [4 ]
Ye, Xiaoling [1 ,2 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Automat, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Jiangsu Collaborat Innovat Ctr Atmospher Environm, Nanjing 210044, Peoples R China
[3] Univ Reading, Whiteknights, POB 217, Reading RG6 6AH, England
[4] State Grid Jiangsu Elect Power Co, Informat & Telecommun Branch, Nanjing 211125, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
underwater video enhancement; underwater image enhancement; optical flow prediction; temporal consistency; IMAGE-ENHANCEMENT; CONTRAST; NETWORK;
DOI
10.3390/jmse13010127
CLC Classification
U6 [Water Transportation]; P75 [Ocean Engineering];
Discipline Codes
0814; 081505; 0824; 082401;
Abstract
Current methods for underwater image enhancement primarily focus on single-frame processing. While these approaches achieve impressive results on static images, they often fail to maintain temporal coherence across frames in underwater videos, leading to temporal artifacts and frame flickering. Furthermore, existing enhancement methods struggle to accurately capture features in underwater scenes, making it difficult to handle challenges such as uneven lighting and edge blurring in complex underwater environments. To address these issues, this paper presents a dual-branch underwater video enhancement network. The network synthesizes short-range video sequences by learning and inferring optical flow from individual frames, and uses the predicted optical flow to enforce temporal consistency across video frames, thereby mitigating temporal instability within frame sequences. In addition, to address the limitations of traditional U-Net models in complex multiscale feature fusion, this study proposes a novel underwater feature fusion module. By applying both max pooling and average pooling, the module separately extracts local and global features, and an attention mechanism adaptively adjusts the weights of different regions in the feature map, effectively enhancing key regions within underwater video frames. Experimental results indicate that, compared with existing underwater image enhancement and temporal consistency enhancement baselines, the proposed model improves the consistency index by 30% with only a marginal 0.6% decrease in the enhancement quality index, demonstrating its superiority in underwater video enhancement tasks.
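The fusion idea described in the abstract (max pooling for local saliency, average pooling for global context, and an attention map that re-weights spatial regions) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, the channel-wise pooling choice, and the simple sigmoid gate standing in for a learned attention layer are all assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic function, mapping values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fusion(feat):
    """Hypothetical sketch of dual-pooling spatial attention.

    feat: array of shape (C, H, W), one multi-channel feature map.
    Max pooling across channels yields a local-saliency map;
    average pooling across channels yields a global-context map.
    Their sigmoid-gated sum re-weights every spatial location
    (a learned convolution would replace the plain sum in practice).
    """
    local_map = feat.max(axis=0)    # (H, W): strongest response per pixel
    global_map = feat.mean(axis=0)  # (H, W): average response per pixel
    attn = sigmoid(local_map + global_map)   # attention weights in (0, 1)
    return feat * attn[None, :, :]           # broadcast weights over channels
```

Because the attention weights lie strictly between 0 and 1, the module acts as a soft spatial mask: regions where both the strongest and the average channel responses are high are attenuated least, which matches the stated goal of emphasizing key regions of each frame.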
Pages: 27
Related Papers
18 records in total
  • [1] Video halftoning preserving temporal consistency
    Hsu, Chao-Yong
    Lu, Chun-Shien
    Pei, Soo-Chang
    2007 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, VOLS 1-5, 2007, : 1938 - +
  • [2] Preserving Semantic and Temporal Consistency for Unpaired Video-to-Video Translation
    Park, Kwanyong
    Woo, Sanghyun
    Kim, Dahun
    Cho, Donghyeon
    Kweon, In So
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 1248 - 1257
  • [3] Preserving the Temporal Consistency of Video Sequences for Surgical Instruments Segmentation
    Li, Yaoqian
    Li, Caizi
    Si, Weixin
    PROCEEDINGS OF 2021 3RD INTERNATIONAL CONFERENCE ON INTELLIGENT MEDICINE AND IMAGE PROCESSING (IMIP 2021), 2021, : 78 - 82
  • [4] Interactive Control over Temporal Consistency while Stylizing Video Streams
    Shekhar, Sumit
    Reimann, Max
    Hilscher, Moritz
    Semmo, Amir
    Doellner, Juergen
    Trapp, Matthias
    COMPUTER GRAPHICS FORUM, 2023, 42 (04)
  • [5] Preserving Global and Local Temporal Consistency for Arbitrary Video Style Transfer
    Wu, Xinxiao
    Chen, Jialu
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1791 - 1799
  • [6] Temporal Consistency Learning of Inter-Frames for Video Super-Resolution
    Liu, Meiqin
    Jin, Shuo
    Yao, Chao
    Lin, Chunyu
    Zhao, Yao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (04) : 1507 - 1520
  • [7] Enhancing Randomization Entropy of x86-64 Code while Preserving Semantic Consistency
    Feng Xuewei
    Wang Dongxia
    Lin Zhechao
    Kuang Xiaohui
    Zhao Gang
    2020 IEEE 19TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2020), 2020, : 1 - 12
  • [8] Exemplar-Based Video Inpainting Approach Using Temporal Relationship of Consecutive Frames
    Hung, Kuo-Lung
    Lai, Shih-che
    2017 IEEE 8TH INTERNATIONAL CONFERENCE ON AWARENESS SCIENCE AND TECHNOLOGY (ICAST), 2017, : 373 - 378
  • [9] Tracking Keypoints from Consecutive Video Frames Using CNN Features for Space Applications
    Borse, Janhavi H.
    Patil, Dipti D.
    Kumar, Vinod
    TEHNICKI GLASNIK-TECHNICAL JOURNAL, 2021, 15 (01): : 11 - 17
  • [10] Learning Temporal Consistency for Low Light Video Enhancement from Single Images
    Zhang, Fan
    Li, Yu
    You, Shaodi
    Fu, Ying
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 4965 - 4974