Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration

Cited by: 1
Authors
Lee, Eunho [1 ]
Hwang, Youngbae [1 ]
Affiliations
[1] Chungbuk Natl Univ, Dept Intelligent Syst & Robot, Cheongju 28644, South Korea
Keywords
Low-level vision; transformer; image restoration; attention module; denoising
DOI
10.1109/ACCESS.2024.3375360
CLC Classification
TP [Automation & Computer Technology]
Subject Classification
0812
Abstract
The transformer architecture achieves outstanding performance in computer vision tasks thanks to its ability to capture long-range dependencies. However, its complexity grows quadratically with spatial resolution, which makes it impractical for image restoration tasks. In this paper, we propose Decomformer, which efficiently captures global relationships by decomposing self-attention into a linear combination of vectors and coefficients, reducing the heavy computational cost. This approximation not only reduces the complexity to linear, but also properly preserves the globality of vanilla self-attention. Moreover, we apply a simple linear gate so that the proposed decomposition directly represents the complex self-attention mechanism. To show the effectiveness of our approach, we apply it to image restoration tasks including denoising, deblurring, and deraining. The proposed decomposition scheme for self-attention achieves results better than or comparable to the state of the art, while being considerably more efficient than most previous approaches.
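The core idea in the abstract (decomposing self-attention so complexity becomes linear in the number of tokens) can be illustrated with a generic linearized-attention sketch. Note this is a minimal illustration of the *class* of technique, not the paper's exact Decomformer formulation; the feature map `phi` and the shapes are assumptions for demonstration.

```python
import numpy as np

def linear_attention(q, k, v):
    """Linearized self-attention sketch: O(N * d^2) instead of O(N^2 * d).

    Applying a positive feature map phi to queries/keys lets us
    reassociate the product: instead of forming the N x N attention
    matrix, we first compute the d x d summary k.T @ v and then
    multiply each query against it.
    """
    phi = lambda x: np.maximum(x, 0) + 1e-6   # simple positive feature map (assumption)
    q_, k_ = phi(q), phi(k)
    kv = k_.T @ v                  # (d, d) global summary, linear in N
    z = q_ @ k_.sum(axis=0)        # per-query normalizer, shape (N,)
    return (q_ @ kv) / z[:, None]  # (N, d) output

# Toy usage: 64 tokens with 16-dimensional features.
N, d = 64, 16
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, N, d))
out = linear_attention(q, k, v)
print(out.shape)  # (64, 16)
```

The key design point is the reassociation `(phi(Q) phi(K)^T) V = phi(Q) (phi(K)^T V)`: the middle product on the right is a fixed `d x d` matrix regardless of resolution, which is what removes the quadratic dependence on the number of pixels.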
Pages: 38672-38684
Page count: 13
Related Papers
50 items total
  • [41] Permutation invariant self-attention infused U-shaped transformer for medical image segmentation
    Patil, Sanjeet S.
    Ramteke, Manojkumar
    Rathore, Anurag S.
    NEUROCOMPUTING, 2025, 625
  • [42] Degradation-Aware Self-Attention Based Transformer for Blind Image Super-Resolution
    Liu, Qingguo
    Gao, Pan
    Han, Kang
    Liu, Ningzhong
    Xiang, Wei
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 7516 - 7528
  • [43] An Efficient Transformer Based on Global and Local Self-Attention for Face Photo-Sketch Synthesis
    Yu, Wangbo
    Zhu, Mingrui
    Wang, Nannan
    Wang, Xiaoyu
    Gao, Xinbo
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 483 - 495
  • [44] Variational joint self-attention for image captioning
    Shao, Xiangjun
    Xiang, Zhenglong
    Li, Yuanxiang
    Zhang, Mingjie
    IET IMAGE PROCESSING, 2022, 16 (08) : 2075 - 2086
  • [45] Relation constraint self-attention for image captioning
    Ji, Junzhong
    Wang, Mingzhan
    Zhang, Xiaodan
    Lei, Minglong
    Qu, Liangqiong
    NEUROCOMPUTING, 2022, 501 : 778 - 789
  • [46] HIGSA: Human image generation with self-attention
    Wu, Haoran
    He, Fazhi
    Si, Tongzhen
    Duan, Yansong
    Yan, Xiaohu
    ADVANCED ENGINEERING INFORMATICS, 2023, 55
  • [47] Attention Guided CAM: Visual Explanations of Vision Transformer Guided by Self-Attention
    Leem, Saebom
    Seo, Hyunseok
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 4, 2024, : 2956 - 2964
  • [48] Self-Attention Attribution: Interpreting Information Interactions Inside Transformer
    Hao, Yaru
    Dong, Li
    Wei, Furu
    Xu, Ke
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 12963 - 12971
  • [49] Improving attention mechanisms in transformer architecture in image restoration
    Berezhnov, N. I.
    Sirota, A. A.
    COMPUTER OPTICS, 2024, 48 (05) : 726 - 733
  • [50] RSAFormer: A method of polyp segmentation with region self-attention transformer
    Yin X.
    Zeng J.
    Hou T.
    Tang C.
    Gan C.
    Jain D.K.
    García S.
    Computers in Biology and Medicine, 2024, 172