Power Allocation in Dual Connectivity Networks Based on Actor-Critic Deep Reinforcement Learning

Cited: 0
Authors
Moein, Elham [1 ]
Hasibi, Ramin [1 ]
Shokri, Matin [2 ]
Rasti, Mehdi [1 ]
Affiliations
[1] Amirkabir Univ Technol, Dept Comp Engn & Informat Technol, Tehran, Iran
[2] KN Toosi Univ Technol, Dept Elect & Comp Engn, Tehran, Iran
Keywords
Dual connectivity; heterogeneous networks; power allocation; deep reinforcement learning;
DOI
10.23919/wiopt47501.2019.9144094
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Subject classification codes
0808 ; 0809 ;
Abstract
Dual Connectivity (DC) has been proposed by the Third Generation Partnership Project (3GPP) to address the small coverage areas and outage of users, and to improve mobility robustness and user rates in Heterogeneous Networks (HetNets). In a HetNet with DC, each user is assigned a Macro eNode Base Station (MeNB) and a Small eNode Base Station (SeNB) and transmits data to both eNode Base Stations (eNBs) simultaneously. In this paper, we present a power splitting scheme for the HetNet with DC that maximizes the total rate of the users while not exceeding the maximum transmit power of each user. In our proposed power splitting scheme, a Deep Reinforcement Learning (DRL) approach based on the actor-critic model over continuous state-action spaces is taken. Simulation results demonstrate that our power splitting scheme outperforms the baseline approaches in terms of total user rate and fairness.
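The abstract's core idea — an actor-critic agent that learns how each user should split its power budget between the MeNB and SeNB links to maximize sum rate — can be illustrated with a minimal toy sketch. This is not the paper's algorithm or simulation setup: the channel model, the Gaussian policy with a sigmoid-parameterized mean, the scalar baseline critic, and all constants (`P_MAX`, `NOISE`, learning rates) are illustrative assumptions for a single-user, state-free case.

```python
import numpy as np

rng = np.random.default_rng(0)
P_MAX = 1.0    # per-user transmit power budget (assumed, W)
NOISE = 1e-3   # receiver noise power (assumed)

def sum_rate(alpha, g_macro, g_small):
    """Total Shannon rate when a fraction alpha of P_MAX goes to the MeNB
    link and the remainder to the SeNB link (power constraint holds by
    construction since alpha is in [0, 1])."""
    r_macro = np.log2(1 + alpha * P_MAX * g_macro / NOISE)
    r_small = np.log2(1 + (1 - alpha) * P_MAX * g_small / NOISE)
    return r_macro + r_small

# Actor: Gaussian policy over the continuous splitting fraction, with
# mean = sigmoid(theta) so it stays in (0, 1).
# Critic: a scalar baseline estimating expected reward (state-free toy).
theta, baseline = 0.0, 0.0
lr_actor, lr_critic, sigma = 0.05, 0.1, 0.1

for episode in range(2000):
    g_macro, g_small = rng.exponential(1.0, size=2)  # random channel gains
    mean = 1.0 / (1.0 + np.exp(-theta))
    alpha = float(np.clip(rng.normal(mean, sigma), 0.0, 1.0))
    reward = sum_rate(alpha, g_macro, g_small)
    advantage = reward - baseline
    # Policy gradient: d/d theta of log N(alpha; mean, sigma), via chain
    # rule through the sigmoid (d mean / d theta = mean * (1 - mean)).
    grad_logp = (alpha - mean) / sigma**2 * mean * (1 - mean)
    theta += lr_actor * advantage * grad_logp
    baseline += lr_critic * advantage

print(f"learned mean split toward MeNB: {1/(1+np.exp(-theta)):.2f}")
```

With symmetric average gains on both links, the learned split tends toward an even division; the paper's actual scheme handles multi-user state via deep networks rather than this scalar parameterization.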
Pages: 170 - 177
Page count: 8