Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration

Cited by: 1
Authors
Lee, Eunho [1 ]
Hwang, Youngbae [1 ]
Affiliation
[1] Chungbuk Natl Univ, Dept Intelligent Syst & Robot, Cheongju 28644, South Korea
Keywords
Low-level vision; Transformer; image restoration; attention module; denoising
DOI
10.1109/ACCESS.2024.3375360
Chinese Library Classification (CLC)
TP [automation and computer technology]
Discipline code
0812
Abstract
The Transformer architecture achieves outstanding performance in computer vision tasks thanks to its ability to capture long-range dependencies. However, its complexity grows quadratically with spatial resolution, which makes it impractical for image restoration tasks. In this paper, we propose Decomformer, which efficiently captures global relationships by decomposing self-attention into a linear combination of vectors and coefficients, reducing the heavy computational cost. This approximation not only lowers the complexity to linear, but also properly preserves the globality of vanilla self-attention. Moreover, we apply a linear simple gate so that the proposed decomposition directly represents the complex self-attention mechanism. To show the effectiveness of our approach, we apply it to image restoration tasks including denoising, deblurring, and deraining. The proposed decomposition scheme for self-attention in the Transformer achieves results that are better than or comparable to the state of the art, while being far more efficient than most previous approaches.
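The abstract states the complexity reduction but not the exact factorization. As a rough illustration only, the following PyTorch sketch shows the generic linear-attention pattern that such decompositions follow: the key-value product is aggregated into a small d x d summary once and reused for every query, instead of materializing the full N x N attention map. The elu(x) + 1 feature map, the normalization, and all names below are assumptions for illustration, not the Decomformer formulation or its linear simple gate.

import torch
import torch.nn.functional as F

def quadratic_attention(q, k, v):
    # Vanilla self-attention: O(N^2 * d) time and O(N^2) memory in the
    # number of tokens N (here, spatial positions of a feature map).
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

def linear_attention(q, k, v, eps=1e-6):
    # Decomposed form: aggregate K^T V into a (d, d) summary first, then
    # apply it to every query -- O(N * d^2), i.e. linear in N.
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0      # positive feature map (assumed)
    kv = k.transpose(-2, -1) @ v               # (d, d) summary of K^T V
    norm = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1) + eps
    return (q @ kv) / norm

n, d = 4096, 64                                # e.g. tokens of a 64x64 feature map
q, k, v = (torch.randn(n, d) for _ in range(3))
print(linear_attention(q, k, v).shape)         # torch.Size([4096, 64])

For n = 4096 tokens, the quadratic path would materialize roughly 16.8 million attention scores, while the decomposed path forms only a 64 x 64 summary; that gap is the source of the claimed linear scaling.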
Pages: 38672-38684
Page count: 13