Video quality enhancement based on visual attention model and multi-level exposure correction

Cited by: 0
Authors: Guo-Shiang Lin, Xian-Wei Ji
Affiliation: [1] Department of Computer Science and Information Engineering, Da-Yeh University
Keywords: Visual attention model; Exposure correction; Image fusion
DOI: not available
Abstract:
Under unfavorable environmental conditions such as insufficient lighting, the poor visual quality of images and videos can make intelligent image/video systems unstable. Visual quality enhancement therefore plays an important role in image/video processing, computer vision, and pattern recognition. In this paper, we propose a video quality enhancement scheme based on a visual attention model and multi-level exposure correction. The proposed scheme is composed of four parts: pre-processing, visual attention model generation, multi-level exposure correction, and temporal filtering. To extract more visual cues for visual attention model generation, a pre-processing step is applied to each frame. After pre-processing, facial and non-facial cues are measured to generate a visual attention map for each frame. Based on these attention maps, a multi-level exposure correction algorithm adjusts the exposure level of each frame to create several intermediate results. By fusing the intermediate results, a synthesized image with good visual quality is obtained. To avoid the flicker effect, a temporal filter is exploited to keep the variance of the exposure level small in the temporal domain. To evaluate the performance of the proposed scheme, images/videos captured by mobile phones and digital cameras are tested. The experimental results show that the proposed scheme can effectively handle images/videos with both low and high exposure levels. The results also demonstrate that the proposed scheme outperforms some existing methods in terms of visual quality.
Pages: 9903–9925 (22 pages)
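
The pipeline described in the abstract (multi-level exposure correction, attention-weighted fusion, and temporal filtering) can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the gamma levels, the well-exposedness weighting, and the exponential smoothing factor are all illustrative assumptions.

```python
import numpy as np

def multi_level_exposure_fusion(frame, attention, gammas=(0.5, 1.0, 2.0)):
    """Create several exposure-corrected versions of a frame (here via
    gamma curves, an assumed stand-in for the paper's correction) and
    fuse them, weighting well-exposed pixels more and boosting weights
    inside the attention map.

    frame, attention: float arrays in [0, 1] of the same shape.
    """
    candidates = [np.power(frame, g) for g in gammas]  # intermediate results
    fused = np.zeros_like(frame)
    total_w = np.zeros_like(frame)
    for cand in candidates:
        # "Well-exposedness": pixels near mid-gray receive higher weight.
        w = np.exp(-((cand - 0.5) ** 2) / (2 * 0.2 ** 2))
        w = w * (1.0 + attention)  # emphasize visually attended regions
        fused += w * cand
        total_w += w
    return fused / np.maximum(total_w, 1e-8)

def smooth_exposure_levels(levels, alpha=0.8):
    """Temporal filter: exponentially smooth per-frame exposure levels so
    their variance stays small across frames, suppressing flicker."""
    out = [levels[0]]
    for x in levels[1:]:
        out.append(alpha * out[-1] + (1 - alpha) * x)
    return out
```

In this sketch, attention only rescales the fusion weights; the paper additionally derives the attention map itself from facial and non-facial cues, which is omitted here.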