Dual Residual Attention Network for ICMOS Sensing Image

Cited: 2
Authors
Wang Xia [1 ,2 ]
Zhang Xin [2 ]
Jiao Gangcheng [1 ]
Yang Ye [1 ]
Cheng Hongchang [1 ]
Yan Bo [1 ]
Affiliations
[1] Sci & Technol Low Light Level Night Vis Lab, Xian 710065, Peoples R China
[2] Beijing Inst Technol, Sch Opt & Photon, Key Lab Optoelect Imaging Technol & Syst, Minist Educ, Beijing 100081, Peoples R China
Keywords
Low-light-level night vision; ICMOS sensing image; Image denoising; Residual learning; Attention module; SPARSE;
DOI
10.3788/gzxb20225106.0610002
Chinese Library Classification
O43 [Optics];
Subject Classification Codes
070207 ; 0803 ;
Abstract
Low-light-level night vision technology explores photoelectric techniques for enhancing, transmitting, storing, reproducing, and applying images captured under low-light conditions, and is an important part of modern optoelectronic technology. ICCD/ICMOS (Intensified CCD/CMOS), formed by coupling an image intensifier to a CCD/CMOS sensor, is a solid-state low-light imaging device with a wide range of applications and the lowest working illuminance among such devices. Although ICMOS can image under low-light night vision conditions, the image intensifier amplifies the noise while enhancing the signal, resulting in obvious random noise in the captured image, and the noise characteristics are more complex than those of conventional CMOS imaging. Because of the microchannel plate, ICMOS sensing image noise is not independent and identically distributed but is aggregated random noise with spatial correlation. Aggregated noise destroys the original structural features of the image, which greatly increases the difficulty of denoising. In this paper, we propose a dual residual attention network for ICMOS sensing image denoising. Our method rests on three main ideas. First, the network adopts residual learning: the output of the network is the noise image rather than the denoised image, and the denoised image is obtained by subtracting the noise image from the original image. A residual-learning network only needs to extract the noise component from the original image, which greatly reduces the difficulty of training. Second, we introduce four residual attention modules into our model, with the number of feature maps decreasing from module to module. Each residual attention module consists of four residual blocks, one channel attention layer, and one convolutional layer. The basic unit of the module is the residual block, which effectively improves network performance.
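The two core mechanisms described in the abstract, channel attention and residual (noise-prediction) learning, can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' code: all shapes, the attention bottleneck size, and the toy `noise_net` are assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention for a C x H x W
    feature map: squeeze each channel to one descriptor by global average
    pooling, excite with a small bottleneck MLP, then rescale every
    channel by its learned importance weight."""
    desc = feat.mean(axis=(1, 2))            # (C,) channel descriptors
    weights = sigmoid(w2 @ relu(w1 @ desc))  # (C,) weights in (0, 1)
    return feat * weights[:, None, None]     # enhance/suppress channels

def residual_denoise(noisy, noise_net):
    """Residual learning: the network predicts the noise image, and the
    denoised image is the input minus that prediction."""
    return noisy - noise_net(noisy)

# Toy forward pass: 8 channels, attention bottleneck of size 2 (assumed).
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
w1, w2 = rng.standard_normal((2, 8)), rng.standard_normal((8, 2))
attended = channel_attention(feat, w1, w2)   # same shape, reweighted
```

In the paper's design, a trained noise-prediction network would stack four such attention-equipped modules of residual blocks; the sketch only shows how one attention layer reweights channels and how the final subtraction recovers the clean estimate.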
At the same time, the residual modules mitigate the problems of vanishing, exploding, and degrading gradients. Finally, the network introduces a channel attention layer, which assigns different weights to the feature maps output by intermediate layers, thereby weighing the importance of each feature channel; according to this importance it enhances useful features and suppresses weak ones, and it guides the network to progressively reduce the dimensionality of the feature maps. Existing deep learning denoising methods mostly target simulated Gaussian-Poisson noise or real noise in natural images, and they cannot be applied directly to ICMOS sensing images. Because of the particularity of ICMOS imaging noise, we built an ICMOS image dataset ourselves, adopting multi-frame averaging to obtain the label images. Each image sequence is captured from a static scene under fixed illumination in a dark room, and one clean label image is synthesized from the sequence by weighted multi-frame averaging. The scene illuminance is accurately measured with an illuminance meter. Images were acquired mainly at three illuminances, 2×10⁻¹, 3×10⁻², and 2×10⁻³ lx, with seven different static scenes collected under each illuminance. Because noise intensity and brightness differ across illuminances, we train a separate model for each illuminance; under each one, two static scenes with 1 000 images serve as the training set. Our method uses the L1 loss as the loss function. Subjective and objective results show that our method achieves better denoising quality and higher efficiency than other state-of-the-art methods.
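The label-generation step can be illustrated with a short sketch (an assumed NumPy implementation, not the authors' tooling): averaging many registered frames of a static scene cancels zero-mean noise, so the weighted mean approaches a clean label image. The L1 training loss mentioned in the abstract is included for completeness.

```python
import numpy as np

def multiframe_label(frames, weights=None):
    """Synthesize one clean label image from a static-scene sequence
    (N x H x W) by weighted averaging: zero-mean noise cancels as the
    number of frames grows, roughly as 1/sqrt(N) for equal weights."""
    frames = np.asarray(frames, dtype=np.float64)
    if weights is None:                      # plain average by default
        weights = np.ones(len(frames))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()        # normalize to sum to 1
    return np.tensordot(weights, frames, axes=1)

def l1_loss(pred, target):
    """Mean absolute error, the loss function named in the abstract."""
    return np.abs(pred - target).mean()

# Example: 1 000 noisy frames of the same scene collapse to a clean label.
rng = np.random.default_rng(1)
clean = rng.uniform(0.0, 1.0, (8, 8))
frames = clean + 0.1 * rng.standard_normal((1000, 8, 8))
label = multiframe_label(frames)             # close to `clean`
```

With 1 000 equally weighted frames, the residual noise standard deviation drops by a factor of about sqrt(1000) ≈ 32, which is why the averaged image can serve as a ground-truth label for training.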
Pages: 10