Frequency-Separated Attention Network for Image Super-Resolution

Cited by: 1
Authors
Qu, Daokuan [1 ,2 ]
Li, Liulian [3 ]
Yao, Rui [3 ]
Affiliations
[1] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Peoples R China
[2] Shandong Polytech Coll, Sch Energy & Mat Engn, Jining 272067, Peoples R China
[3] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 10
Keywords
densely connected structure; frequency-separated; channel-wise and spatial attention; image super-resolution;
DOI
10.3390/app14104238
CLC Classification
O6 [Chemistry];
Subject Classification
0703;
Abstract
The use of deep convolutional neural networks has significantly improved the performance of image super-resolution. However, employing ever-deeper networks to enhance the non-linear mapping from low-resolution (LR) to high-resolution (HR) images inadvertently weakens information flow and disrupts long-term memory. Moreover, overly deep networks are difficult to train and therefore fail to exhibit expressive capability commensurate with their depth. High-frequency and low-frequency features also play different roles in image super-resolution, yet CNN-based networks, which should focus more on high-frequency features, treat the two types of features equally. This results in redundant computation on low-frequency features and causes complex, detailed regions of the reconstructed images to appear as smooth as the background. To maintain long-term memory and focus on the restoration of image details in a network with strong representational capability, we propose the Frequency-Separated Attention Network (FSANet), in which dense connections ensure the full utilization of multi-level features. In the Feature Extraction Module (FEM), a Res ASPP Module expands the network's receptive field without increasing its depth. To differentiate between high-frequency and low-frequency features within the network, we introduce the Feature-Separated Attention Block (FSAB). Furthermore, to enhance the quality of the restored images using heuristic features, we incorporate attention mechanisms into the Low-Frequency Attention Block (LFAB) and the High-Frequency Attention Block (HFAB) to process low-frequency and high-frequency features, respectively. The proposed network outperforms current state-of-the-art methods on benchmark datasets.
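The abstract's core idea lends itself to a short illustration. The PyTorch sketch below shows one plausible reading of frequency-separated attention: features are split into a low-frequency (blurred) band and a high-frequency (residual) band, the low-frequency branch receives channel-wise attention, the high-frequency branch receives spatial attention, and the two bands are fused with a residual connection. The pooling-based split and every class name here (FrequencySplit, FSABSketch, etc.) are illustrative assumptions for this record, not the authors' released implementation of FSANet.

# Minimal sketch, assuming a pooling-based frequency split and standard
# channel/spatial attention; not the authors' code.
import torch
import torch.nn as nn

class FrequencySplit(nn.Module):
    """Hypothetical frequency separation: an average-pooled copy of the feature
    map approximates the low-frequency band; the residual is the high-frequency band."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.blur = nn.AvgPool2d(kernel_size, stride=1, padding=kernel_size // 2)

    def forward(self, x):
        low = self.blur(x)   # smooth content (flat regions, background)
        high = x - low       # edges and fine textures
        return low, high

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel-wise attention (assumed for the low-frequency branch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class SpatialAttention(nn.Module):
    """Single-channel spatial attention map (assumed for the high-frequency branch)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

class FSABSketch(nn.Module):
    """Toy frequency-separated attention block: process the two bands with
    different attention and fuse them under a residual connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.split = FrequencySplit()
        self.low_branch = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                        ChannelAttention(channels))
        self.high_branch = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                         SpatialAttention())
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        low, high = self.split(x)
        out = self.fuse(torch.cat([self.low_branch(low), self.high_branch(high)], dim=1))
        return x + out  # residual path helps preserve information flow / long-term memory

if __name__ == "__main__":
    block = FSABSketch(64)
    feats = torch.randn(1, 64, 48, 48)
    print(block(feats).shape)  # torch.Size([1, 64, 48, 48])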
Pages: 19
Related Papers
50 records in total
  • [41] Hao, Fangwei; Zhang, Taiping; Zhao, Linchang; Tang, Yuanyan. Efficient residual attention network for single image super-resolution. Applied Intelligence, 2022, 52: 652-661.
  • [42] Niu, Zhong-Han; Zhou, Yang-Hao; Yang, Yu-Bin; Fan, Jian-Cong. A Novel Attention Enhanced Dense Network for Image Super-Resolution. Multimedia Modeling (MMM 2020), Pt I, 2020, 11961: 568-580.
  • [43] Dai, Yaonan; Yu, Jiuyang; Hu, Tianhao; Lu, Yang; Zheng, Xiaotao. Structured Fusion Attention Network for Image Super-Resolution Reconstruction. IEEE Access, 2022, 10: 31896-31906.
  • [44] Wang, Li; Xu, Lizhong; Shi, Jianqiang; Shen, Jie; Huang, Fengcheng. Lightweight adaptive enhanced attention network for image super-resolution. Multimedia Tools and Applications, 2022, 81: 6513-6537.
  • [45] Feng, Hao; Wang, Liejun; Cheng, Shuli; Du, Anyu; Li, Yongming. Dynamic dual attention iterative network for image super-resolution. Applied Intelligence, 2022, 52(07): 8189-8208.
  • [46] Ding, Zixuan; Juan, Zhang; Xiang, Li; Wang, Xinyu. Lightweight Attention-Guided Network for Image Super-Resolution. Laser & Optoelectronics Progress, 2023, 60(14).
  • [47] Wang, Li; Xu, Lizhong; Shi, Jianqiang; Shen, Jie; Huang, Fengcheng. Lightweight adaptive enhanced attention network for image super-resolution. Multimedia Tools and Applications, 2022, 81(05): 6513-6537.
  • [48] Yang, Yue; Qi, Yong. Hierarchical accumulation network with grid attention for image super-resolution. Knowledge-Based Systems, 2021, 233.
  • [49] Yang, Lianping; Tang, Jian; Niu, Ben; Fu, Haoyue; Zhu, Hegui; Jiang, Wuming; Wang, Xin. Single image super-resolution via a ternary attention network. Applied Intelligence, 2023, 53: 13067-13081.
  • [50] Zhao, X.-Q.; Wang, Z.; Song, Z.-Y.; Jiang, H.-M. Image super-resolution reconstruction based on dynamic attention network. Journal of Zhejiang University (Engineering Science), 2023, 57(08): 1487-1494.