Hybrid CNN-transformer network for efficient CSI feedback

Cited: 0
Authors
Zhao, Ruohan [1 ]
Liu, Ziang [1 ]
Song, Tianyu [1 ]
Jin, Jiyu [1 ]
Jin, Guiyue [1 ]
Fan, Lei [1 ]
Affiliations
[1] Dalian Polytech Univ, Sch Informat Sci & Engn, Dalian 116034, Peoples R China
Keywords
CSI feedback; Massive MIMO; Self-attention; Transformer; Convolutional neural networks; Deep learning;
DOI
10.1016/j.phycom.2024.102477
CLC classification
TM [Electrotechnics]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809;
Abstract
In recent years, many deep learning-based methods have been applied to the feedback of Channel State Information (CSI) in massive MIMO systems. Transformer-based networks leverage global self-attention, which effectively captures long-range correlations between antennas, while Convolutional Neural Networks (CNNs) excel at extracting local information. To combine the advantages of both, this paper proposes an Efficient Feature Aggregation Network called EFANet, which hybridizes CNNs and Transformers. Specifically, we propose Refined Window Multi-head Self-Attention (RW-MSA), which combines a Convolutional Embedding Unit (CEU) with Window Multi-head Self-Attention (W-MSA) to reduce information loss between windows and achieve efficient feature aggregation. Additionally, we develop a Local Enhanced Feedforward Network (LEFN) to further integrate local information in the CSI matrix and model fine-grained features of different regions. Finally, a Compensation Unit (CU) is designed to further compensate for global and local features in the CSI matrix. Through these designs, global and local features interact fully, reducing information loss. Extensive experiments show that the proposed method achieves better CSI reconstruction performance while reducing computational complexity.
Pages: 8
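The abstract centers on window-based multi-head self-attention (W-MSA), in which self-attention is computed independently inside non-overlapping windows of the feature map, which is what keeps the cost low but also what causes the inter-window information loss that RW-MSA is designed to mitigate. Below is a minimal NumPy sketch of plain W-MSA; the window size, head count, and identity Q/K/V projections are illustrative assumptions, not the paper's RW-MSA.

```python
import numpy as np


def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping ws x ws windows.

    Returns an array of shape (num_windows, ws*ws, C).
    """
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)


def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)


def window_self_attention(x, ws=4, num_heads=2):
    """Plain windowed multi-head self-attention.

    Q, K, V are taken as identity projections of the input for brevity;
    a real layer would use learned linear projections. Attention is
    confined to each ws x ws window, so tokens in different windows
    never interact.
    """
    H, W, C = x.shape
    windows = window_partition(x, ws)                # (nW, N, C), N = ws*ws
    nW, N, _ = windows.shape
    d = C // num_heads
    # Split channels into heads: (nW, num_heads, N, d)
    qkv = windows.reshape(nW, N, num_heads, d).transpose(0, 2, 1, 3)
    # Scaled dot-product attention within each window and head
    attn = softmax(qkv @ qkv.transpose(0, 1, 3, 2) / np.sqrt(d))
    out = (attn @ qkv).transpose(0, 2, 1, 3).reshape(nW, N, C)
    return out
```

Because attention never crosses a window boundary, perturbing a pixel in one window leaves every other window's output unchanged; this locality is the gap the paper's CEU/RW-MSA combination and Compensation Unit aim to close.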