Single Image Super Resolution via Multi-Attention Fusion Recurrent Network

Cited by: 3
Authors
Kou, Qiqi [1 ]
Cheng, Deqiang [2 ]
Zhang, Haoxiang [2 ]
Liu, Jingjing [2 ]
Guo, Xin [3 ]
Jiang, He [2 ]
Affiliations
[1] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Peoples R China
[2] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou, Peoples R China
[3] Huawei Hangzhou Res Inst, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Super resolution; multiplexing-based; attention fusion mechanism; recurrent network;
DOI
10.1109/ACCESS.2023.3314196
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Deep convolutional neural networks have significantly improved single-image super-resolution performance in recent years. However, most of the proposed networks are single-channel, which makes it difficult to fully exploit the feature-extraction capabilities of neural networks. This paper proposes the Multi-attention Fusion Recurrent Network (MFRN), a network built on a multiplexing architecture. First, the algorithm reuses the feature-extraction stage to construct a recurrent network; this reuse reduces the number of network parameters, accelerates training, and captures rich features at the same time. Second, a multiplexing-based structure is employed to extract deep feature information, alleviating feature loss during transmission. Third, an attention fusion mechanism is incorporated into the network to fuse channel-attention and pixel-attention information, effectively enhancing the representational power of each layer. Compared with other algorithms, MFRN not only exhibits superior visual quality but also achieves favorable results in objective evaluations: it generates images with sharper structure and texture details and attains higher scores in quantitative image quality assessments.
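To make the attention fusion mechanism concrete, here is a minimal sketch of a block that fuses channel attention and pixel attention, written in PyTorch. The module names (ChannelAttention, PixelAttention, AttentionFusion), the squeeze-and-excitation style channel branch, the 1x1-convolution pixel branch, and the concatenation-plus-residual fusion are illustrative assumptions, not the paper's confirmed MFRN design.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel attention: global average pooling
    # followed by a bottleneck MLP that yields one weight per channel.
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # per-channel reweighting

class PixelAttention(nn.Module):
    # Pixel attention: a 1x1 convolution plus sigmoid yields one weight
    # per spatial location and channel.
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.conv(x)  # per-pixel reweighting

class AttentionFusion(nn.Module):
    # Runs both attention branches in parallel, concatenates their outputs,
    # and fuses them with a 1x1 convolution plus a residual connection.
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.pa = PixelAttention(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([self.ca(x), self.pa(x)], dim=1))
        return x + fused  # residual path eases training of a reused stage

For example, AttentionFusion(64)(torch.randn(1, 64, 32, 32)) has the same shape as its input, so such a block could be stacked inside a reused (recurrent) feature-extraction stage without changing tensor dimensions.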
Pages: 98653-98665
Number of pages: 13
Related Papers
50 records in total
  • [1] Multi-attention augmented network for single image super-resolution
    Chen, Rui
    Zhang, Heng
    Liu, Jixin
    PATTERN RECOGNITION, 2022, 122
  • [2] Multi-attention fusion transformer for single-image super-resolution
    Li, Guanxing
    Cui, Zhaotong
    Li, Meng
    Han, Yu
    Li, Tianping
SCIENTIFIC REPORTS, 2024, 14 (01)
  • [3] Multi-Attention Residual Network for Image Super Resolution
    Chang, Qing
    Jia, Xiaotian
    Lu, Chenhao
    Ye, Jian
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (08)
  • [4] Single image deraining via a recurrent multi-attention enhancement network
    Liu, Yuetong
    Zhang, Rui
    Zhang, Yunfeng
    Yao, Xunxiang
    Han, Huijian
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2023, 113
  • [5] A Multi-Attention Feature Distillation Neural Network for Lightweight Single Image Super-Resolution
    Zhang, Yongfei
    Lin, Xinying
    Yang, Hong
    He, Jie
    Qing, Linbo
    He, Xiaohai
    Li, Yi
    Chen, Honggang
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2024, 2024
  • [6] Gated Multi-Attention Feedback Network for Medical Image Super-Resolution
    Shang, Jianrun
    Zhang, Xue
    Zhang, Guisheng
    Song, Wenhao
    Chen, Jinyong
    Li, Qilei
    Gao, Mingliang
    ELECTRONICS, 2022, 11 (21)
  • [7] Multi-feature fusion attention network for single image super-resolution
    Chen, Jiacheng
    Wang, Wanliang
    Xing, Fangsen
    Tu, Hangyao
    IET IMAGE PROCESSING, 2023, 17 (05) : 1389 - 1402
  • [8] PYRAMID FUSION ATTENTION NETWORK FOR SINGLE IMAGE SUPER-RESOLUTION
    He, Hao
    Du, Zongcai
    Li, Wenfeng
    Tang, Jie
    Wu, Gangshan
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2165 - 2169
  • [9] MAFUNet: Multi-Attention Fusion Network for Medical Image Segmentation
    Wang, Lili
    Zhao, Jiayu
    Yang, Hailu
    IEEE ACCESS, 2023, 11 : 109793 - 109802
  • [10] MACFNet: multi-attention complementary fusion network for image denoising
    Yu, Jiaolong
    Zhang, Juan
    Gao, Yongbin
    APPLIED INTELLIGENCE, 2023, 53 (13) : 16747 - 16761