Enhancing GF-NOMA Spectral Efficiency Under Imperfections Using Deep Reinforcement Learning

Cited by: 1
Authors
Alajmi, Abdullah [1 ]
Ghandoura, Abdulrahman [2 ]
Affiliations
[1] Prince Sattam Bin Abdulaziz Univ, Coll Business Adm, Al Kharj 16278, Saudi Arabia
[2] Umm Al Qura Univ, Appl Coll, Dept Engn & Appl Sci, Mecca 24382, Saudi Arabia
Keywords
NOMA; Interference cancellation; Resource management; Spectral efficiency; Quality of service; Wireless communication; Optimization; Deep reinforcement learning; multi-carrier non-orthogonal multiple access; grant-free NOMA; NONORTHOGONAL RANDOM-ACCESS; RESOURCE-ALLOCATION; IOT NETWORKS; UPLINK NOMA; POWER; SCHEME
DOI
10.1109/LCOMM.2024.3408083
Chinese Library Classification
TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0809
Abstract
In this letter, we present a deep reinforcement learning (DRL) based multi-carrier grant-free (GF) non-orthogonal multiple access (NOMA) scheme for Internet of Things networks to solve the joint power and sub-carrier allocation problem. Compared with existing work in this area, the proposed scheme is more practical: it accounts for imperfections in successive interference cancellation (SIC) and allows unrestricted user sub-carrier selection. In the proposed DRL framework, each GF user acts as an agent that learns an optimal resource selection policy. Finding optimal policies requires a good trade-off between exploration and exploitation; a split of 60% exploration and 40% exploitation yields better rewards. Numerical results show the significant impact of SIC imperfections on spectral efficiency. Compared with the benchmark schemes, the proposed scheme improves user fairness by up to 62.1% and outperforms single-carrier GF-NOMA in terms of spectral efficiency.
Pages: 1870 - 1874
Page count: 5
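
The abstract describes each GF user as an independent DRL agent that balances exploration and exploitation with a 60%/40% split when selecting resources. Purely as an illustration (not the authors' implementation), the following Python sketch shows one way such an agent could work, using tabular Q-learning over joint (sub-carrier, power-level) actions; the constants `NUM_SUBCARRIERS` and `POWER_LEVELS`, the single dummy state, and the toy collision-based reward are all assumptions made for this sketch.

```python
# Minimal sketch (not the paper's implementation): each grant-free user is
# an independent tabular Q-learning agent choosing a (sub-carrier, power)
# action with a fixed 60%/40% exploration/exploitation split, as reported
# in the abstract. All constants and the toy reward are illustrative.
import random
from collections import defaultdict

NUM_SUBCARRIERS = 4              # illustrative; the paper's setting may differ
POWER_LEVELS = [0.1, 0.5, 1.0]   # illustrative transmit-power choices
EXPLORE_PROB = 0.6               # 60% exploration / 40% exploitation
ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount factor

# Joint action space: every (sub-carrier, power-level) pair.
ACTIONS = [(sc, p) for sc in range(NUM_SUBCARRIERS) for p in POWER_LEVELS]


class GFUserAgent:
    """One grant-free user acting as an independent Q-learning agent."""

    def __init__(self):
        self.q = defaultdict(float)  # Q[(state, action)] -> value

    def select_action(self, state):
        # 60/40 exploration/exploitation split from the abstract.
        if random.random() < EXPLORE_PROB:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + GAMMA * best_next
        self.q[(state, action)] += ALPHA * (td_target - self.q[(state, action)])


def toy_reward(action, collided):
    # Hypothetical reward: zero on a sub-carrier collision, otherwise a
    # rate-like value that grows with transmit power.
    _, power = action
    return 0.0 if collided else power


# Tiny usage example: two agents contending for sub-carriers.
agents = [GFUserAgent() for _ in range(2)]
state = 0  # single dummy state for this sketch
for step in range(1000):
    actions = [ag.select_action(state) for ag in agents]
    chosen_scs = [a[0] for a in actions]
    for ag, act in zip(agents, actions):
        collided = chosen_scs.count(act[0]) > 1  # same sub-carrier picked twice
        ag.update(state, act, toy_reward(act, collided), state)
```

The fixed 60/40 split mirrors the ratio the abstract reports as giving better rewards; a practical system would more likely decay the exploration probability over time and replace the Q-table with a deep Q-network to handle realistic state spaces.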