Self-attentive Pyramid Network for Single Image De-raining

Cited: 0
|
Authors
Guo, Taian [1 ]
Dai, Tao [1 ,2 ]
Li, Jiawei [1 ]
Xia, Shu-Tao [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Grad Sch Shenzhen, Shenzhen 518055, Guangdong, Peoples R China
[2] Peng Cheng Lab, PCL Res Ctr Networks & Commun, Shenzhen 518055, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Rain streak removal; Encoder-decoder network; Self-attention;
DOI
10.1007/978-3-030-36708-4_32
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Rain streaks in a single image can severely degrade visual quality and thus hurt the performance of current computer vision algorithms. To remove rain streaks effectively, many CNN-based methods have recently been developed and have obtained impressive performance. However, most existing CNN-based methods focus on network design and rarely exploit the spatial correlations of features. In this paper, we propose a deep self-attentive pyramid network (SAPN) that learns more powerful feature representations for single image de-raining. Specifically, we propose a self-attentive pyramid module (SAM), which consists of convolutional layers enhanced by self-attention calculation units (SACUs) to capture abstractions of image content, and deconvolutional layers to upsample the feature maps and recover image details. In addition, we propose self-attention-based skip connections that symmetrically link convolutional and deconvolutional layers to better exploit spatial contextual information. To model rain streaks of various scales and shapes, a multi-scale pooling (MSP) module is also introduced to efficiently leverage features from different scales. Extensive experiments on both synthetic and real-world datasets demonstrate the effectiveness of our proposed method in terms of both quantitative metrics and visual quality.
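The abstract's self-attention calculation units compute, for each spatial position of a feature map, an attention-weighted aggregation over all other positions. The paper's exact formulation is not given in this record, so the sketch below is a generic non-local spatial self-attention block in NumPy, with the 1x1-convolution projections modeled as random matrices (`Wq`, `Wk`, `Wv` are illustrative placeholders, not the authors' parameters).

```python
import numpy as np

def spatial_self_attention(feat, reduction=8, seed=0):
    """Generic non-local spatial self-attention over a CNN feature map.

    feat: (C, H, W) array. Every spatial position attends to all others;
    the attention-weighted sum of values is added back to the input as a
    residual. Projection weights stand in for learned 1x1 convolutions.
    """
    C, H, W = feat.shape
    N = H * W
    Cr = max(C // reduction, 1)                    # reduced channel dim
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((Cr, C)) / np.sqrt(C)  # query projection
    Wk = rng.standard_normal((Cr, C)) / np.sqrt(C)  # key projection
    Wv = rng.standard_normal((C, C)) / np.sqrt(C)   # value projection

    x = feat.reshape(C, N)                         # flatten spatial dims
    q, k, v = Wq @ x, Wk @ x, Wv @ x               # (Cr,N), (Cr,N), (C,N)
    logits = q.T @ k                               # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over positions
    out = v @ attn.T                               # aggregate values
    return (x + out).reshape(C, H, W)              # residual connection

feat = np.random.default_rng(1).standard_normal((16, 8, 8))
out = spatial_self_attention(feat)
print(out.shape)  # (16, 8, 8)
```

The O(N^2) position-by-position attention map is what lets such a unit capture long-range spatial correlations (e.g. rain streaks spanning the image) that plain convolutions, with their local receptive fields, miss.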
Pages: 390-401
Page count: 12
Related Papers
50 records
  • [1] Pyramid fully residual network for single image de-raining
    Yao, Guangle
    Wang, Cong
    Wu, Yutong
    Wang, Yang
    [J]. NEUROCOMPUTING, 2021, 456 : 168 - 178
  • [2] Gradual Network for Single Image De-raining
    Yu, Weijiang
    Huang, Zhe
    Zhang, Wayne
    Feng, Litong
    Xiao, Nong
    [J]. PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 1795 - 1804
  • [3] Single Image De-raining Based on a Novel Enhanced Attentive Generative Adversarial Network
    Zhou, Haochen
    Wei, Qin
    [J]. 5TH ANNUAL INTERNATIONAL CONFERENCE ON INFORMATION SYSTEM AND ARTIFICIAL INTELLIGENCE (ISAI2020), 2020, 1575
  • [4] A pyramid non-local enhanced residual dense network for single image de-raining
    Zhao, Minghua
    Fan, Hengrui
    Du, Shuangli
    Wang, Li
    Li, Peng
    Hu, Jing
    [J]. IET IMAGE PROCESSING, 2021, 15 (08) : 1786 - 1799
  • [5] Multi-Scale Weighted Fusion Attentive Generative Adversarial Network for Single Image De-Raining
    Bi, Xiaojun
    Xing, Junyao
    [J]. IEEE ACCESS, 2020, 8 : 69838 - 69848
  • [6] EAGNet: Elementwise Attentive Gating Network-Based Single Image De-Raining With Rain Simplification
    Ahn, Namhyun
    Jo, So Yeon
    Kang, Suk-Ju
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (02) : 608 - 620
  • [7] Recurrent Attention Dense Network for Single Image De-Raining
    Chai, Guoqiang
    Wang, Zhaoba
    Guo, Guodong
    Chen, Youxing
    Jin, Yong
    Wang, Wei
    Zhao, Xia
    [J]. IEEE ACCESS, 2020, 8 : 111278 - 111288
  • [8] Recursive attention collaboration network for single image de-raining
    Li, Zhitong
    Li, Xiaodong
    Gong, Zhaozhe
    Yu, Zhensheng
    [J]. IET CYBER-SYSTEMS AND ROBOTICS, 2024, 6 (02)
  • [9] Disentangled Representation Learning and Enhancement Network for Single Image De-Raining
    Wang, Guoqing
    Sun, Changming
    Xu, Xing
    Li, Jingjing
    Wang, Zheng
    Ma, Zeyu
    [J]. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 3015 - 3023
  • [10] Fast Single Image De-raining via a Weighted Residual Network
    Zhuge, Ruibin
    Xia, Haiying
    Li, Haisheng
    Song, Shuxiang
    [J]. NEURAL INFORMATION PROCESSING (ICONIP 2018), PT VI, 2018, 11306 : 257 - 268