Deep Reinforcement Learning Based Link Adaptation Technique for LTE/NR Systems

Times Cited: 11
Authors
Ye, Xiaowen [1,2]
Yu, Yiding [3]
Fu, Liqun [1,2]
Affiliations
[1] Xiamen Univ, Sch Informat, Xiamen, Peoples R China
[2] Xiamen Univ, Key Lab Underwater Acoust Commun & Marine Informat, Minist Educ, Xiamen, Peoples R China
[3] Chinese Univ Hong Kong, Dept Informat Engn, Hong Kong, Peoples R China
Keywords
Link adaptation; deep reinforcement learning; channel quality indicator; modulation and coding scheme; ADAPTIVE MODULATION; SELECTION; NETWORKS;
DOI
10.1109/TVT.2023.3236791
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification Code
0808; 0809
Abstract
Outdated channel quality indicator (CQI) feedback causes severe performance degradation of traditional link adaptation (LA) techniques in long term evolution (LTE) and new radio (NR) systems. This paper puts forth a deep reinforcement learning (DRL) based LA technique, referred to as deep reinforcement learning link adaptation (DRLLA), to select efficient modulation and coding schemes (MCSs) in the presence of outdated CQI feedback. The goal of DRLLA is to maximize the link throughput while achieving a low block error rate (BLER). We first give explicit definitions of the state, action, and reward in the DRL paradigm, thereby realizing DRLLA. Then, to trade off the throughput against the BLER, we develop a new experience replay mechanism called classified experience replay (CER) as the underpinning technique in DRLLA. In CER, experiences are separated into two buckets, one for successful experiences and the other for failed experiences, and a fixed proportion from each bucket is sampled for replay. The essence of CER is to obtain different trade-offs by adjusting the sampling proportion between the two types of training experiences. Furthermore, to reduce the signaling overhead and the system reconfiguration cost caused by frequent MCS switching, we propose a new action selection strategy termed switching controlled ε-greedy (SC-ε-greedy) for DRLLA. Simulation results demonstrate that, compared with the state-of-the-art OLLA and LTSLA schemes and with DRLLA using other experience replay mechanisms, DRLLA with CER achieves higher throughput and lower BLER in various time-varying scenarios and is more robust to different CQI feedback delays and CQI reporting periods. Furthermore, with the SC-ε-greedy policy, DRLLA captures better trade-offs between the link transmission quality and the MCS switching overhead than other baselines.
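The CER mechanism described in the abstract (two experience buckets and a fixed sampling proportion) can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' implementation: the class name, the buffer capacity, and the `success_ratio` parameter are placeholders chosen here for exposition.

```python
import random
from collections import deque

class ClassifiedExperienceReplay:
    """Minimal sketch of classified experience replay (CER): successful (ACK)
    and failed (NACK) transitions live in separate buckets, and each minibatch
    draws a fixed proportion from each bucket."""

    def __init__(self, capacity=10_000, success_ratio=0.5):
        self.success_buf = deque(maxlen=capacity)  # transport block decoded correctly
        self.failure_buf = deque(maxlen=capacity)  # transport block received in error
        self.success_ratio = success_ratio         # fraction of each minibatch from success_buf

    def store(self, state, action, reward, next_state, ack):
        """Route a transition to the bucket matching its ACK/NACK outcome."""
        (self.success_buf if ack else self.failure_buf).append(
            (state, action, reward, next_state))

    def sample(self, batch_size):
        """Draw a minibatch with a fixed success/failure proportion."""
        n_succ = min(int(round(batch_size * self.success_ratio)), len(self.success_buf))
        n_fail = min(batch_size - n_succ, len(self.failure_buf))
        batch = (random.sample(list(self.success_buf), n_succ)
                 + random.sample(list(self.failure_buf), n_fail))
        random.shuffle(batch)
        return batch
```

The `success_ratio` knob corresponds to the proportion the abstract refers to: one would expect that replaying a larger share of failed experiences nudges the agent toward more conservative MCS choices (lower BLER), while a larger share of successful experiences favors throughput.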
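The abstract does not detail how SC-ε-greedy suppresses frequent MCS switching, so the rule below is only one plausible interpretation offered as an assumption: standard ε-greedy exploration combined with a hysteresis margin that keeps the previously selected MCS unless another action's Q-value exceeds it by a threshold. The function name, `prev_action`, and `switch_margin` are all hypothetical.

```python
import random
import numpy as np

def sc_epsilon_greedy(q_values, prev_action, epsilon=0.05, switch_margin=0.1):
    """Hypothetical switching-controlled epsilon-greedy rule: explore with
    probability epsilon; otherwise switch away from the previous MCS only
    when the best Q-value beats the previous action's Q-value by a margin."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))   # uniform exploration
    best = int(np.argmax(q_values))
    if q_values[best] - q_values[prev_action] > switch_margin:
        return best                              # switch: clear expected improvement
    return prev_action                           # stay: avoid needless MCS reconfiguration
```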
Pages: 7364-7379
Number of pages: 16