Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration

Cited by: 1
Authors
Lee, Eunho [1 ]
Hwang, Youngbae [1 ]
Affiliations
[1] Chungbuk Natl Univ, Dept Intelligent Syst & Robot, Cheongju 28644, South Korea
Keywords
Low-level vision; transformer; image restoration; attention module; denoising
DOI
10.1109/ACCESS.2024.3375360
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
The transformer architecture achieves outstanding performance in computer vision tasks owing to its ability to capture long-range dependencies. However, the quadratic growth of its complexity with respect to spatial resolution makes it impractical for image restoration tasks. In this paper, we propose Decomformer, which efficiently captures global relationships by decomposing self-attention into a linear combination of vectors and coefficients, thereby reducing the heavy computational cost. This approximation not only reduces the complexity to linear but also properly preserves the global receptive field of vanilla self-attention. Moreover, we apply a simple linear gate so that the proposed decomposition directly represents the complex self-attention mechanism. To show the effectiveness of our approach, we apply it to image restoration tasks including denoising, deblurring, and deraining. The proposed decomposition scheme for self-attention achieves results that are better than or comparable to the state of the art, while being considerably more efficient than most previous approaches.
Pages: 38672-38684
Page count: 13
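The abstract describes reducing self-attention's quadratic cost by decomposing it into global vectors and coefficients. Below is a minimal PyTorch sketch of that general idea (linear-complexity attention that aggregates a global context once, then redistributes it per pixel), under assumed shapes and softmax placement; the module name DecomposedAttention, the 1x1-convolution projections, and the head count are illustrative assumptions, not the paper's exact formulation, and the simple linear gate mentioned in the abstract is omitted.

```python
# Sketch of linear-complexity attention via decomposition into global
# context vectors and per-position coefficients (assumed formulation,
# not the authors' exact Decomformer design).
import torch
import torch.nn as nn


class DecomposedAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=1)

        # Flatten spatial dimensions: (b, heads, d, N) with d = c // heads, N = h * w.
        def split(t):
            return t.reshape(b, self.heads, c // self.heads, h * w)

        q, k, v = map(split, (q, k, v))

        # Normalize keys over spatial positions to obtain coefficients.
        k = k.softmax(dim=-1)

        # Global context vectors: (b, heads, d, d). Cost is O(N * d^2),
        # instead of the O(N^2 * d) of vanilla spatial self-attention.
        context = k @ v.transpose(-2, -1)

        # Redistribute the global context to each position via the queries.
        out = context.transpose(-2, -1) @ q          # (b, heads, d, N)
        out = out.reshape(b, c, h, w)
        return self.proj(out)


if __name__ == "__main__":
    attn = DecomposedAttention(dim=32, heads=4)
    y = attn(torch.randn(1, 32, 64, 48))
    print(y.shape)  # torch.Size([1, 32, 64, 48])
```

Because the N x N attention map is never formed, memory and compute scale linearly with the number of pixels, which is the property the abstract emphasizes for high-resolution restoration inputs.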