Frequency-Separated Attention Network for Image Super-Resolution

Cited by: 1
Authors
Qu, Daokuan [1 ,2 ]
Li, Liulian [3 ]
Yao, Rui [3 ]
Affiliations
[1] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Peoples R China
[2] Shandong Polytech Coll, Sch Energy & Mat Engn, Jining 272067, Peoples R China
[3] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, No. 10
Keywords
densely connected structure; frequency-separated; channel-wise and spatial attention; image super-resolution;
DOI
10.3390/app14104238
CLC Number
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
The use of deep convolutional neural networks has significantly improved super-resolution performance. However, employing ever-deeper networks to strengthen the non-linear mapping from low-resolution (LR) to high-resolution (HR) images weakens the information flow and disrupts long-term memory, and overly deep networks are difficult to train, so they fail to deliver the expressive power their depth would suggest. High-frequency and low-frequency features also play different roles in image super-resolution. CNN-based networks, which should concentrate on high-frequency features, treat the two types of features equally; this wastes computation on low-frequency features and causes complex, detailed parts of the reconstructed images to appear as smooth as the background. To maintain long-term memory and focus on restoring image details in a network with strong representational capability, we propose the Frequency-Separated Attention Network (FSANet), in which dense connections ensure that multi-level features are fully utilized. In the Feature Extraction Module (FEM), a Res ASPP Module expands the network's receptive field without increasing its depth. To differentiate between high-frequency and low-frequency features within the network, we introduce the Feature-Separated Attention Block (FSAB). Furthermore, to enhance the quality of the restored images using heuristic features, we incorporate attention mechanisms into the Low-Frequency Attention Block (LFAB) and the High-Frequency Attention Block (HFAB), which process low-frequency and high-frequency features, respectively. The proposed network outperforms current state-of-the-art methods on benchmark datasets.
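The record contains no implementation details beyond the abstract, but the frequency-separation idea it describes can be illustrated with a short sketch. In the minimal PyTorch example below, the low-frequency path is approximated by average-pool blurring followed by channel attention, and the high-frequency path is the residual detail signal followed by spatial attention; the class names, the pooling-based split, and all layer sizes are assumptions made for illustration and do not reproduce the authors' actual FSAB/LFAB/HFAB design.

# Hypothetical sketch of a frequency-separated attention block;
# the low/high split and the attention choices are illustrative
# assumptions, not the published FSANet architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (low-frequency path)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Global average pooling -> per-channel gating weights.
        return x * self.fc(F.adaptive_avg_pool2d(x, 1))

class SpatialAttention(nn.Module):
    """Spatial attention from pooled channel statistics (high-frequency path)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class FrequencySeparatedAttentionBlock(nn.Module):
    """Split features into low/high-frequency parts and attend to each separately."""
    def __init__(self, channels):
        super().__init__()
        self.low_att = ChannelAttention(channels)
        self.high_att = SpatialAttention()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        # Low-frequency approximation: blur by downsampling and upsampling back.
        low = F.interpolate(F.avg_pool2d(x, 2), size=x.shape[-2:],
                            mode='bilinear', align_corners=False)
        high = x - low  # residual holds edges and textures
        out = self.fuse(torch.cat([self.low_att(low), self.high_att(high)], dim=1))
        return out + x  # residual connection preserves information flow

if __name__ == "__main__":
    block = FrequencySeparatedAttentionBlock(64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])

The toy driver runs the block on a random 64-channel feature map and prints the unchanged output shape; attending to the two branches with different mechanisms is one plausible way to spend more capacity on high-frequency detail, as the abstract motivates.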
Pages: 19
Related Papers
50 records in total
  • [1] A Frequency-Separated 3D-CNN for Hyperspectral Image Super-Resolution
    Wang, Liguo
    Bi, Tianyi
    Shi, Yao
    IEEE ACCESS, 2020, 8 : 86367 - 86379
  • [2] Lightweight frequency-based attention network for image super-resolution
    Tang, E.
    Wang, Li
    Wang, Yuanyuan
    Yu, Yongtao
    Zeng, Xiaoqin
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (05)
  • [3] Adaptive Attention Network for Image Super-resolution
    Chen, Y.-M.
    Zhou, D.-W.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (08): 1950 - 1960
  • [4] Frequency Separation Network for Image Super-Resolution
    Li, Shanshan
    Cai, Qiang
    Li, Haisheng
    Cao, Jian
    Wang, Lei
    Li, Zhuangzi
    IEEE ACCESS, 2020, 8 : 33768 - 33777
  • [5] Context Reasoning Attention Network for Image Super-Resolution
    Zhang, Yulun
    Wei, Donglai
    Qin, Can
    Wang, Huan
    Pfister, Hanspeter
    Fu, Yun
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 4258 - 4267
  • [6] Residual shuffle attention network for image super-resolution
    Li, Xuanyi
    Shao, Zhuhong
    Li, Bicao
    Shang, Yuanyuan
    Wu, Jiasong
    Duan, Yuping
    MACHINE VISION AND APPLICATIONS, 2023, 34 (05)
  • [7] Attention mechanism feedback network for image super-resolution
    Chen, Xiao
    Jing, Ruyun
    Sun, Chaowen
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (04)
  • [8] Pyramid Attention Dense Network for Image Super-Resolution
    Chen, Si-Bao
    Hu, Chao
    Luo, Bin
    Ding, Chris H. Q.
    Huang, Shi-Lei
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [9] Augmented global attention network for image super-resolution
    Du, Xiaobiao
    Jiang, Saibiao
    Liu, Jie
    IET IMAGE PROCESSING, 2022, 16 (02) : 567 - 575
  • [10] Stratified attention dense network for image super-resolution
    Liu, Zhiwei
    Mao, Xiaofeng
    Huang, Ji
    Gan, Menghan
    Zhang, Yueyuan
    SIGNAL, IMAGE AND VIDEO PROCESSING, 2022, 16 : 715 - 722