An efficient parallel self-attention transformer for CSI feedback

Cited by: 0
Authors
Liu, Ziang [1 ]
Song, Tianyu [1 ]
Zhao, Ruohan [1 ]
Jin, Jiyu [1 ]
Jin, Guiyue [1 ]
Affiliations
[1] Dalian Polytechnic University, School of Information Science & Engineering, Dalian 116034, People's Republic of China
Keywords
CSI feedback; Massive MIMO; Self-attention; Transformer; Deep learning
DOI
10.1016/j.phycom.2024.102483
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline codes
0808; 0809
Abstract
In massive multiple-input multiple-output (MIMO) systems, user equipment (UE) must feed the downlink channel state information (CSI) back to the base station (BS). As the number of antennas grows, this CSI feedback consumes a significant share of the uplink bandwidth. To reduce the feedback overhead, we propose EPAformer, an efficient parallel attention transformer: a lightweight network that combines the transformer architecture with efficient parallel self-attention (EPSA) for the CSI feedback task. EPSA effectively expands the attention area of each token within the transformer block by dividing the attention heads into parallel groups and performing self-attention within horizontal and vertical stripes, which yields better feature compression and reconstruction. Simulation results show that EPAformer surpasses previous deep learning-based approaches in both reconstruction performance and complexity.
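The stripe-based head grouping described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' implementation: the module name ParallelStripeAttention, the stripe width, and the input shapes are assumptions chosen only to show how half of the heads attend within horizontal stripes while the other half attends within vertical stripes in parallel.

```python
# Minimal sketch of parallel stripe self-attention, loosely following the
# abstract's description of EPSA. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class ParallelStripeAttention(nn.Module):
    def __init__(self, dim, num_heads=4, stripe_width=2):
        super().__init__()
        assert dim % num_heads == 0 and num_heads % 2 == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.stripe = stripe_width
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def _stripe_attn(self, q, k, v, h, w, horizontal):
        # q, k, v: (B, heads, H*W, head_dim); attention is restricted to stripes.
        b, g, _, d = q.shape
        if horizontal:  # group rows into horizontal stripes of height `stripe`
            def to_stripes(x):
                x = x.reshape(b, g, h, w, d)
                return x.reshape(b, g, h // self.stripe, self.stripe * w, d)
        else:           # group columns into vertical stripes of width `stripe`
            def to_stripes(x):
                x = x.reshape(b, g, h, w, d).transpose(2, 3)
                return x.reshape(b, g, w // self.stripe, self.stripe * h, d)
        q, k, v = map(to_stripes, (q, k, v))
        attn = (q @ k.transpose(-2, -1)) * d ** -0.5      # attention inside each stripe
        out = attn.softmax(dim=-1) @ v
        if horizontal:
            out = out.reshape(b, g, h, w, d)
        else:
            out = out.reshape(b, g, w, h, d).transpose(2, 3)
        return out.reshape(b, g, h * w, d)

    def forward(self, x, h, w):
        # x: (B, H*W, C) flattened feature map, e.g. an angular-delay CSI matrix.
        b, n, c = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)              # each (B, heads, N, head_dim)
        half = self.num_heads // 2
        # First half of the heads attends within horizontal stripes,
        # the second half within vertical stripes, in parallel.
        out_h = self._stripe_attn(q[:, :half], k[:, :half], v[:, :half], h, w, True)
        out_v = self._stripe_attn(q[:, half:], k[:, half:], v[:, half:], h, w, False)
        out = torch.cat([out_h, out_v], dim=1)            # (B, heads, N, head_dim)
        out = out.transpose(1, 2).reshape(b, n, c)
        return self.proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 32 * 32, 64)                       # e.g. a 32x32 CSI map with 64 channels
    attn = ParallelStripeAttention(dim=64, num_heads=4, stripe_width=2)
    print(attn(x, 32, 32).shape)                          # torch.Size([2, 1024, 64])
```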
Pages: 7
Related papers (50 records)
  • [1] Lee, Eunho; Hwang, Youngbae. Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration. IEEE Access, 2024, 12: 38672-38684.
  • [2] Wang, Linyu; Cao, Yize; Xiang, Jianhong; Jiang, Hanyu; Zhong, Yu. Massive MIMO-FDD self-attention CSI feedback network for outdoor environments. Signal, Image and Video Processing, 2024, 18(6-7): 5511-5518.
  • [3] Maziarka, Łukasz; Majchrowski, Dawid; Danel, Tomasz; Gaiński, Piotr; Tabor, Jacek; Podolak, Igor; Morkisz, Paweł; Jastrzębski, Stanisław. Relative molecule self-attention transformer. Journal of Cheminformatics, 2024, 16(01).
  • [4] Ebert, Nikolas; Reichardt, Laurenz; Stricker, Didier; Wasenmueller, Oliver. Light-Weight Vision Transformer with Parallel Local and Global Self-Attention. 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 2023: 452-459.
  • [5] Ebert, Nikolas; Stricker, Didier; Wasenmueller, Oliver. PLG-ViT: Vision Transformer with Parallel Local and Global Self-Attention. Sensors, 2023, 23(07).
  • [6] Chen, Sixiang; Ye, Tian; Liu, Yun; Chen, Erkang. Dual-former: Hybrid self-attention transformer for efficient image restoration. Digital Signal Processing, 2024, 149.
  • [7] Nguyen, Dai Quoc; Nguyen, Tu Dinh; Phung, Dinh. Universal Graph Transformer Self-Attention Networks. Companion Proceedings of the Web Conference 2022 (WWW 2022 Companion), 2022: 193-196.
  • [8] Huang, Wenli; Deng, Ye; Hui, Siqi; Wu, Yang; Zhou, Sanping; Wang, Jinjun. Sparse self-attention transformer for image inpainting. Pattern Recognition, 2024, 145.
  • [9] Gao, Lei; Yan, Xiaohong; Deng, Lizhen; Xu, Guoxia; Zhu, Hu. SST: self-attention transformer for infrared deconvolution. Infrared Physics & Technology, 2024, 140.