Pyramid Channel-based Feature Attention Network for image dehazing

Cited by: 154
Authors
Zhang, Xiaoqin [1 ]
Wang, Tao [1 ]
Wang, Jinxin [1 ]
Tang, Guiying [1 ]
Zhao, Li [1 ]
Affiliations
[1] Wenzhou Univ, Coll Comp Sci & Artificial Intelligence, Wenzhou 325035, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image dehazing; Deep neural network; Channel attention; Model
DOI
10.1016/j.cviu.2020.103003
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Traditional deep learning-based image dehazing methods usually rely on high-level features (which carry more semantic information) to remove haze from the input image, while ignoring low-level features (which carry more detail). In this paper, a Pyramid Channel-based Feature Attention Network (PCFAN) is proposed for single image dehazing, which leverages the complementarity among features at different levels in a pyramid manner with a channel attention mechanism. PCFAN consists of three modules: a three-scale feature extraction module, a pyramid channel-based feature attention (PCFA) module, and an image reconstruction module. The three-scale feature extraction module simultaneously captures low-level spatial structural features and high-level contextual features at different scales. The PCFA module combines the feature pyramid with the channel attention mechanism, which effectively extracts interdependent channel maps and selectively aggregates the more important features in a pyramid manner for image dehazing. The image reconstruction module recovers a clear image from the aggregated features. Meanwhile, a loss function that combines a mean squared error term and an edge loss term is employed in PCFAN, which better preserves image details. Experimental results demonstrate that the proposed PCFAN outperforms existing state-of-the-art algorithms on standard benchmark datasets in terms of accuracy, efficiency, and visual quality. The code will be made publicly available.
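The abstract describes the channel attention mechanism and the combined MSE-plus-edge loss only at a high level. The sketch below is a minimal illustration of those two ingredients in PyTorch, assuming a squeeze-and-excitation style channel gate and a Sobel-based edge term; the names ChannelAttention, edge_map, dehazing_loss, and the weight lam are hypothetical and not taken from the paper, whose exact design may differ.

# Hypothetical sketch: SE-style channel attention and an MSE + edge loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Reweights feature channels by their global importance (SE-style gate)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global average per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel gate in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)                              # excite: rescale each channel map

def edge_map(img: torch.Tensor) -> torch.Tensor:
    """Approximate edge magnitude with fixed Sobel filters, applied per channel."""
    c = img.shape[1]
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=img.device)
    ky = kx.t()
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=c)
    gy = F.conv2d(img, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def dehazing_loss(pred: torch.Tensor, target: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Pixel-wise MSE plus a weighted MSE on edge maps to preserve detail."""
    return F.mse_loss(pred, target) + lam * F.mse_loss(edge_map(pred), edge_map(target))

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    attn = ChannelAttention(64)
    print(attn(feats).shape)                               # torch.Size([2, 64, 32, 32])
    pred, gt = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
    print(dehazing_loss(pred, gt).item())

The edge term penalizes differences in gradient magnitude between the dehazed output and the ground truth, which is one common way to encode the "better preserves image details" goal stated in the abstract.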
Pages: 9