FeDRL-D2D: Federated Deep Reinforcement Learning- Empowered Resource Allocation Scheme for Energy Efficiency Maximization in D2D-Assisted 6G Networks

Cited by: 0
Authors
Noman, Hafiz Muhammad Fahad [1 ]
Dimyati, Kaharudin [1 ]
Noordin, Kamarul Ariffin [1 ]
Hanafi, Effariza [1 ]
Abdrabou, Atef [2 ]
Affiliations
[1] Univ Malaya, Fac Engn, Dept Elect Engn, Adv Commun Res & Innovat ACRI, Kuala Lumpur 50603, Malaysia
[2] UAE Univ, Coll Engn, Elect & Commun Engn Dept, Al Ain, U Arab Emirates
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
6G; device-to-device communications; double deep Q-network (DDQN); energy efficiency; federated-deep reinforcement learning (F-DRL); resource allocation; POWER-CONTROL; OPTIMIZATION; SELECTION;
DOI
10.1109/ACCESS.2024.3434619
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Device-to-device (D2D)-assisted 6G networks are expected to support the proliferation of ubiquitous mobile applications by enhancing system capacity and overall energy efficiency towards a connected, sustainable world. However, the stringent quality of service (QoS) requirements for ultra-massive connectivity, limited network resources, and interference management pose significant challenges to deploying multiple device-to-device pairs (DDPs) without disrupting cellular users. Hence, intelligent resource management and power control are indispensable for alleviating interference among DDPs to optimize overall system performance and global energy efficiency. Considering this, we present a federated DRL-based method for energy-efficient resource management in a D2D-assisted heterogeneous network (HetNet). We formulate a joint optimization problem of power control and channel allocation to maximize the system's energy efficiency under QoS constraints for cellular user equipment (CUEs) and DDPs. The proposed scheme employs federated learning as a decentralized training paradigm to address user privacy, and a double deep Q-network (DDQN) is used for intelligent resource management. The proposed DDQN method uses two separate Q-networks for action selection and target estimation to rationalize the transmit power and dynamic channel selection, in which DDPs, acting as agents, can reuse the uplink channels of CUEs. Simulation results show that the proposed method improves overall system energy efficiency by 41.52% and achieves sum-rate gains of 11.65%, 24.78%, and 47.29% over multi-agent actor-critic (MAAC), distributed deep-deterministic policy gradient (D3PG), and deep Q-network (DQN) scheduling, respectively.
Moreover, the proposed scheme achieves a 5.88%, 15.79%, and 27.27% reduction in cellular outage probability compared to MAAC, D3PG, and DQN scheduling, respectively, which makes it a robust solution for energy-efficient resource allocation in D2D-assisted 6G networks.
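The abstract names two mechanisms worth unpacking: the DDQN update, where one Q-network selects the next action while a second evaluates it, and federated aggregation, where agents share model weights rather than raw data. The following is a minimal NumPy sketch of both ideas under toy assumptions (linear Q-function stand-ins, illustrative dimensions, hypothetical names), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: each D2D pair (agent) observes a small state vector and
# picks one of a few (channel, power-level) actions. All names and sizes
# here are illustrative.
STATE_DIM, N_ACTIONS, GAMMA = 4, 6, 0.9

def init_qnet():
    """A linear Q-network stand-in: Q(s) = s @ W + b."""
    return {"W": rng.normal(0.0, 0.1, (STATE_DIM, N_ACTIONS)),
            "b": np.zeros(N_ACTIONS)}

def q_values(net, s):
    return s @ net["W"] + net["b"]

def ddqn_target(online, target, reward, s_next, done):
    """Double-DQN target: the ONLINE net selects the next action and the
    TARGET net evaluates it, decoupling selection from estimation."""
    a_star = int(np.argmax(q_values(online, s_next)))  # action selection
    q_eval = q_values(target, s_next)[a_star]          # target estimation
    return reward + (0.0 if done else GAMMA * q_eval)

def fedavg(local_nets):
    """FedAvg-style aggregation: parameter-wise average of the agents'
    locally trained networks; only weights leave the device, never data."""
    n = len(local_nets)
    return {k: sum(net[k] for net in local_nets) / n for k in local_nets[0]}

# One illustrative round: three agents hold locally trained weights, the
# server averages them, and the aggregated model is broadcast back.
agents = [init_qnet() for _ in range(3)]
global_net = fedavg(agents)
y = ddqn_target(global_net, agents[0], reward=1.0,
                s_next=rng.normal(size=STATE_DIM), done=False)
```

The decoupling in `ddqn_target` is what the abstract means by "two separate Q-networks for action selection and target estimation": it curbs the over-estimation bias of vanilla DQN, where a single network both picks and scores the maximizing action.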
Pages: 109775-109792
Page count: 18
Related Papers (50 in total)
  • [41] Energy Efficient Downlink Resource Allocations for D2D-Assisted Cellular Networks With Mobile Edge Caching
    Liu, Yuanfei
    Wang, Ying
    Sun, Ruijin
    Meng, Sachula
    Su, Runcong
    IEEE ACCESS, 2019, 7 : 2053 - 2067
  • [42] Energy-Efficient Power Control and Resource Allocation Based on Deep Reinforcement Learning for D2D Communications in Cellular Networks
    Alenezi, Sami
    Luo, Chunbo
    Min, Geyong
    20th Int Conf on Ubiquitous Computing and Communications (IUCC) / 20th Int Conf on Computer and Information Technology (CIT) / 4th Int Conf on Data Science and Computational Intelligence (DSCI) / 11th Int Conf on Smart Computing, Networking, and Services (SmartCNS), 2021: 76 - 83
  • [43] Energy-Efficient Mode Selection and Resource Allocation for D2D-Enabled Heterogeneous Networks: A Deep Reinforcement Learning Approach
    Zhang, Tao
    Zhu, Kun
    Wang, Junhua
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20 (02) : 1175 - 1187
  • [44] D2D Resource Allocation Based on Reinforcement Learning and QoS
    Kuo, Fang-Chang
    Wang, Hwang-Cheng
    Tseng, Chih-Cheng
    Wu, Jung-Shyr
    Xu, Jia-Hao
    Chang, Jieh-Ren
    MOBILE NETWORKS & APPLICATIONS, 2023, 28 (03): 1076 - 1095
  • [46] Deep Reinforcement Learning Empowered Multiple UAVs-assisted Caching and Offloading Optimization in D2D Wireless Networks
    Lin, Na
    Qin, Hongzhi
    Shi, Junling
    Zhao, Liang
    PROCEEDINGS OF THE 19TH ACM INTERNATIONAL CONFERENCE ON COMPUTING FRONTIERS 2022 (CF 2022), 2022, : 150 - 158
  • [47] D2D-Assisted Multi-User Cooperative Partial Offloading in MEC Based on Deep Reinforcement Learning
    Guan, Xin
    Lv, Tiejun
    Lin, Zhipeng
    Huang, Pingmu
    Zeng, Jie
    SENSORS, 2022, 22 (18)
  • [48] Deep Multi-Agent Reinforcement Learning for Resource Allocation in D2D Communication Underlaying Cellular Networks
    Zhang, Xu
    Lin, Ziqi
    Ding, Beichen
    Gu, Bo
    Han, Yu
    APNOMS 2020: 2020 21ST ASIA-PACIFIC NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM (APNOMS), 2020, : 55 - 60
  • [49] Hybrid Deep Reinforcement Learning-Based Task Offloading for D2D-Assisted Cloud-Edge-Device Collaborative Networks
    Fan, Wenhao
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 13455 - 13471
  • [50] Improving the Spectral Efficiency in Dense Heterogeneous Networks Using D2D-Assisted eICIC
    Elshatshat, Mohamed A.
    Papadakis, Stefanos
    Angelakis, Vangelis
    2018 IEEE 23RD INTERNATIONAL WORKSHOP ON COMPUTER AIDED MODELING AND DESIGN OF COMMUNICATION LINKS AND NETWORKS (CAMAD), 2018, : 32 - 37