NOMA Codebook Optimization by Batch Gradient Descent

Cited by: 8
Authors
Si, Zhongwei [1 ]
Wen, Shaoguo [1 ]
Dong, Bing [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Key Lab Universal Wireless Commun, Minist Educ, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Batch gradient descent; codebook optimization; neural network; non-orthogonal multiple access; pairwise error probability;
DOI
10.1109/ACCESS.2019.2936483
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Non-orthogonal multiple access (NOMA) has the potential to improve spectrum efficiency and user connectivity compared to orthogonal schemes. The codebook design is crucial for the performance of a NOMA system. In this paper, we comprehensively investigate NOMA codebook design involving characteristics from multiple signal domains. Minimizing the pairwise error probability is taken as the optimization objective. A neural network framework is explored for the optimization, in which the mapping functions on the edges are treated as weights. The method of batch gradient descent is applied to optimize the weights and, correspondingly, the codebook. Simulation results reveal that with the optimized codebook the error performance is significantly improved compared to schemes in the literature.
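As a rough illustration of the optimization idea described in the abstract, the sketch below runs full-batch gradient descent on per-user complex codebooks, minimizing a Chernoff-style upper bound on the pairwise error probability of the superimposed constellation over an AWGN channel. This is a minimal sketch, not the paper's neural-network formulation with edge mapping functions; the dimensions K, M, N, the noise level N0, the learning rate, and the surrogate loss are illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

K, M, N = 2, 4, 2      # users, codewords per user, complex resource dimensions (assumed)
N0 = 0.5               # assumed noise spectral density at the design SNR
lr = 0.05              # learning rate
iters = 2000

# Random complex initialization, one codebook per user: shape (K, N, M)
C = (rng.standard_normal((K, N, M)) + 1j * rng.standard_normal((K, N, M))) / np.sqrt(2)

def normalize(C):
    """Project each user's codebook back to unit average codeword energy."""
    power = np.mean(np.sum(np.abs(C) ** 2, axis=1), axis=1)          # (K,)
    return C / np.sqrt(power)[:, None, None]

messages = list(product(range(M), repeat=K))  # all joint message tuples (M^K of them)

def loss_and_grad(C):
    """Chernoff PEP surrogate sum_{m<m'} exp(-||x_m - x_m'||^2 / (4 N0))
    over superimposed points x_m, plus its Wirtinger gradient w.r.t. each codeword."""
    X = np.stack([sum(C[k][:, m[k]] for k in range(K)) for m in messages])  # (M^K, N)
    G = np.zeros_like(C)
    loss = 0.0
    for i in range(len(messages)):
        for j in range(len(messages)):
            if i == j:
                continue
            d = X[i] - X[j]
            f = np.exp(-np.sum(np.abs(d) ** 2) / (4 * N0))
            loss += 0.5 * f                       # each unordered pair counted once
            g = -f / (4 * N0) * d                 # dL/d(conj x_i) for this pair
            for k in range(K):                    # chain rule into each user's codeword
                G[k][:, messages[i][k]] += g
    return loss, G

C = normalize(C)
for it in range(iters):
    loss, G = loss_and_grad(C)
    C = normalize(C - lr * G)                     # full-batch gradient step + power projection
    if it % 500 == 0:
        print(f"iter {it:5d}  PEP surrogate {loss:.4f}")
```

In the paper's actual framework, analogous gradient steps would instead update the weights of a neural network whose edges realize the mapping functions, rather than the codeword entries directly as done here.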
Pages: 117274-117281
Number of pages: 8
Related papers (50 in total)
  • [21] Stochastic gradient descent for optimization for nuclear systems
    Williams, Austin
    Walton, Noah
    Maryanski, Austin
    Bogetic, Sandra
    Hines, Wes
    Sobes, Vladimir
    [J]. SCIENTIFIC REPORTS, 2023, 13 (01)
  • [22] Distributed Optimization with Gradient Descent and Quantized Communication
    Rikos, Apostolos I.
    Jiang, Wei
    Charalambous, Themistoklis
    Johansson, Karl H.
    [J]. IFAC PAPERSONLINE, 2023, 56 (02) : 5900 - 5906
  • [23] Network revenue management with online inverse batch gradient descent method
    Chen, Yiwei
    Shi, Cong
    [J]. PRODUCTION AND OPERATIONS MANAGEMENT, 2023, 32 (07) : 2123 - 2137
  • [24] Stochastic normalized gradient descent with momentum for large-batch training
    Zhao, Shen-Yi
    Shi, Chang-Wei
    Xie, Yin-Peng
    Li, Wu-Jun
    [J]. SCIENCE CHINA-INFORMATION SCIENCES, 2024, 67 (11)
  • [25] Statistical Analysis of Fixed Mini-Batch Gradient Descent Estimator
    Qi, Haobo
    Wang, Feifei
    Wang, Hansheng
    [J]. JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2023, 32 (04) : 1348 - 1360
  • [26] aSGD: Stochastic Gradient Descent with Adaptive Batch Size for Every Parameter
    Shi, Haoze
    Yang, Naisen
    Tang, Hong
    Yang, Xin
    [J]. MATHEMATICS, 2022, 10 (06)
  • [27] HYPERSPECTRAL UNMIXING VIA PROJECTED MINI-BATCH GRADIENT DESCENT
    Li, Jing
    Li, Xiaorun
    Zhao, Liaoying
    [J]. 2017 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS), 2017, : 1133 - 1136
  • [28] Online convex optimization in the bandit setting: gradient descent without a gradient
    Flaxman, Abraham D.
    Kalai, Adam Tauman
    McMahan, H. Brendan
    [J]. PROCEEDINGS OF THE SIXTEENTH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, 2005, : 385 - 394
  • [29] A conjugate gradient method with descent direction for unconstrained optimization
    Yuan, Gonglin
    Lu, Xiwen
    Wei, Zengxin
    [J]. JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 2009, 233 (02) : 519 - 530
  • [30] Learning Deep Gradient Descent Optimization for Image Deconvolution
    Gong, Dong
    Zhang, Zhen
    Shi, Qinfeng
    van den Hengel, Anton
    Shen, Chunhua
    Zhang, Yanning
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (12) : 5468 - 5482