A Model-Based GNN for Learning Precoding

Times Cited: 0
Authors
Guo, Jia [1 ]
Yang, Chenyang [1 ]
Affiliations
[1] Beihang Univ, Sch Elect & Informat Engn, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Precoding; Training; Mathematical models; Complexity theory; Wireless communication; Graph neural networks; Signal to noise ratio; Graph neural network; model-based; permutation equivariance; precoding; matrix pseudo-inverse; GRAPH NEURAL-NETWORKS; OPTIMIZATION; ALLOCATION; SYSTEMS; DESIGN;
DOI
10.1109/TWC.2023.3336911
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology and Communication Technology];
Discipline Code
0808; 0809;
Abstract
Learning precoding policies with neural networks enables low-complexity implementation, robustness to channel impairments, and joint optimization with channel acquisition. However, purely data-driven methods for learning precoding suffer from high training complexity and poor generalizability across problem scales, while existing model-driven learning methods are either algorithm-specific or problem-specific. In this paper, we propose a model-based graph neural network (GNN) that learns precoding policies by harnessing their properties and the relevant mathematical model. We first show that a vanilla GNN cannot learn zero-forcing precoding when the numbers of antennas and users are large, and does not generalize to the number of users. We then conceive a new GNN structure by resorting to an iterative Taylor expansion of the matrix pseudo-inverse, which can adapt to the interference strength among users. Simulation results show that the proposed GNN learns different precoding policies well (e.g., spectral-efficient and energy-efficient precoding as well as coordinated beamforming) with low training complexity. Moreover, it generalizes to the number of users, which is highly desirable in practice since the number of scheduled users may change within milliseconds.
Pages: 6983-6999
Number of Pages: 17
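
The abstract states that the proposed GNN structure is built around an iterative Taylor expansion of the matrix pseudo-inverse. As a rough illustration of that building block only (not the paper's network), the minimal sketch below approximates the right pseudo-inverse used in zero-forcing precoding, H^H (H H^H)^(-1), with a truncated Neumann (Taylor-type) series in NumPy; the step size alpha, the number of terms, and the helper name taylor_pinv are illustrative assumptions, not taken from the paper.

    import numpy as np

    def taylor_pinv(H, num_terms=50):
        """Approximate the right pseudo-inverse H^H (H H^H)^{-1} with a truncated
        Neumann (Taylor-type) series. Illustrative sketch, not the paper's GNN."""
        B = H @ H.conj().T                   # K x K Gram matrix, assumed full rank
        alpha = 1.0 / np.linalg.norm(B, 2)   # keeps spectral radius of (I - alpha*B) below 1
        I = np.eye(B.shape[0], dtype=B.dtype)
        power = I.copy()
        B_inv = alpha * I                    # k = 0 term of alpha * sum_k (I - alpha*B)^k
        for _ in range(1, num_terms):
            power = power @ (I - alpha * B)  # (I - alpha*B)^k
            B_inv += alpha * power           # accumulate the series
        return H.conj().T @ B_inv            # zero-forcing direction: H^H (H H^H)^{-1}

    # The gap to NumPy's exact pseudo-inverse shrinks as num_terms grows.
    rng = np.random.default_rng(0)
    H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
    for n in (10, 100, 1000):
        print(n, np.linalg.norm(taylor_pinv(H, n) - np.linalg.pinv(H)))

In this sketch the approximation error relative to np.linalg.pinv decreases as more series terms are accumulated, which mirrors how an iterative expansion can trade depth for accuracy when it is unrolled into a network structure.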