Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration

Cited by: 1
Authors
Lee, Eunho [1]
Hwang, Youngbae [1]
Affiliations
[1] Chungbuk Natl Univ, Dept Intelligent Syst & Robot, Cheongju 28644, South Korea
Keywords
Low-level vision; transformer; image restoration; attention module; denoising
DOI
10.1109/ACCESS.2024.3375360
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The Transformer architecture achieves outstanding performance on computer vision tasks owing to its ability to capture long-range dependencies. However, the quadratic growth of self-attention complexity with spatial resolution makes it impractical for image restoration. In this paper, we propose Decomformer, which efficiently captures global relationships by decomposing self-attention into a linear combination of vectors and coefficients, thereby reducing the heavy computational cost. This approximation not only lowers the complexity from quadratic to linear, but also properly preserves the global modeling ability of vanilla self-attention. Moreover, we apply a simple linear gate so that the proposed decomposition directly represents the complex self-attention mechanism. To demonstrate the effectiveness of our approach, we apply it to image restoration tasks including denoising, deblurring, and deraining. The proposed decomposition scheme for self-attention achieves results that are better than or comparable to the state of the art, while being far more efficient than most previous approaches.
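The abstract describes the decomposition only at a high level, so the snippet below is a minimal sketch of the general idea rather than the authors' implementation: the quadratic token-to-token attention map is replaced by per-token coefficients over a small set of global basis vectors, which makes the cost linear in the number of tokens, and the output is modulated by a simple linear gate. The module name DecomposedAttention, the parameter num_bases, the softmax normalization, and the elementwise gating choice are all assumptions made purely for illustration.

```python
# Illustrative sketch of a linearly decomposed self-attention block
# (not the Decomformer code; names and design choices are assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedAttention(nn.Module):
    def __init__(self, dim: int, num_bases: int = 32):
        super().__init__()
        # Per-token coefficients over a small set of global bases (M << N).
        self.to_coeff = nn.Linear(dim, num_bases)
        self.to_value = nn.Linear(dim, dim)
        # Simple linear gate: elementwise product with a linear projection
        # (assumed form; the paper's exact gate may differ).
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) flattened image tokens.
        coeff = F.softmax(self.to_coeff(x), dim=1)        # (B, N, M), normalized over tokens
        v = self.to_value(x)                              # (B, N, C)
        # Aggregate values into M global vectors: O(N * M * C), linear in N.
        bases = torch.einsum("bnm,bnc->bmc", coeff, v)    # (B, M, C)
        # Redistribute the global vectors back to every token.
        out = torch.einsum("bnm,bmc->bnc", coeff, bases)  # (B, N, C)
        out = out * self.gate(x)                          # linear gating
        return self.proj(out)

# Usage: 4096 tokens (e.g. a 64x64 feature map) with 48 channels.
x = torch.randn(2, 64 * 64, 48)
y = DecomposedAttention(dim=48, num_bases=32)(x)
print(y.shape)  # torch.Size([2, 4096, 48])
```

Because the number of bases is a fixed constant, both einsum contractions scale linearly with the token count, in contrast to the quadratic cost of forming a full N x N attention map.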
Pages: 38672-38684
Number of pages: 13
Related Papers
50 records in total
  • [21] Research of Self-Attention in Image Segmentation
    Cao, Fude
    Zheng, Chunguang
    Huang, Limin
    Wang, Aihua
    Zhang, Jiong
    Zhou, Feng
    Ju, Haoxue
    Guo, Haitao
    Du, Yuxia
    JOURNAL OF INFORMATION TECHNOLOGY RESEARCH, 2022, 15 (01)
  • [22] Improve Image Captioning by Self-attention
    Li, Zhenru
    Li, Yaoyi
    Lu, Hongtao
    NEURAL INFORMATION PROCESSING, ICONIP 2019, PT V, 2019, 1143 : 91 - 98
  • [23] Densely Connected Transformer With Linear Self-Attention for Lightweight Image Super-Resolution
    Zeng, Kun
    Lin, Hanjiang
    Yan, Zhiqiang
    Fang, Jinsheng
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [24] Group-spectral superposition and position self-attention transformer for hyperspectral image classification
    Zhang, Weitong
    Hu, Mingwei
    Hou, Sihan
    Shang, Ronghua
    Feng, Jie
    Xu, Songhua
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 265
  • [25] Self-Attention Technology in Image Segmentation
    Cao, Fude
    Lu, Xueyun
    INTERNATIONAL CONFERENCE ON INTELLIGENT TRAFFIC SYSTEMS AND SMART CITY (ITSSC 2021), 2022, 12165
  • [26] RsMmFormer: Multimodal Transformer Using Multiscale Self-attention for Remote Sensing Image Classification
    Zhang, Bo
    Ming, Zuheng
    Liu, Yaqian
    Feng, Wei
    He, Liang
    Zhao, Kaixing
    ARTIFICIAL INTELLIGENCE, CICAI 2023, PT I, 2024, 14473 : 329 - 339
  • [27] Efficient brain tumor segmentation using Swin transformer and enhanced local self-attention
    Ghazouani, Fethi
    Vera, Pierre
    Ruan, Su
    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, 2024, 19 : 273 - 281
  • [28] Efficient brain tumor segmentation using Swin transformer and enhanced local self-attention
    Ghazouani, Fethi
    Vera, Pierre
    Ruan, Su
    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, 2023, 19 (2) : 273 - 281
  • [29] Multi-scale self-attention generative adversarial network for pathology image restoration
    Liang, Meiyan
    Zhang, Qiannan
    Wang, Guogang
    Xu, Na
    Wang, Lin
    Liu, Haishun
    Zhang, Cunlin
    VISUAL COMPUTER, 2023, 39 (09): : 4305 - 4321
  • [30] Slide-Transformer: Hierarchical Vision Transformer with Local Self-Attention
    Pan, Xuran
    Ye, Tianzhu
    Xia, Zhuofan
    Song, Shiji
    Huang, Gao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 2082 - 2091