FeDRL-D2D: Federated Deep Reinforcement Learning-Empowered Resource Allocation Scheme for Energy Efficiency Maximization in D2D-Assisted 6G Networks

Cited by: 0
Authors
Noman, Hafiz Muhammad Fahad [1 ]
Dimyati, Kaharudin [1 ]
Noordin, Kamarul Ariffin [1 ]
Hanafi, Effariza [1 ]
Abdrabou, Atef [2 ]
Affiliations
[1] Univ Malaya, Fac Engn, Dept Elect Engn, Adv Commun Res & Innovat ACRI, Kuala Lumpur 50603, Malaysia
[2] UAE Univ, Coll Engn, Elect & Commun Engn Dept, Al Ain, U Arab Emirates
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
6G; device-to-device communications; double deep Q-network (DDQN); energy efficiency; federated-deep reinforcement learning (F-DRL); resource allocation; POWER-CONTROL; OPTIMIZATION; SELECTION;
DOI
10.1109/ACCESS.2024.3434619
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Device-to-device (D2D)-assisted 6G networks are expected to support the proliferation of ubiquitous mobile applications by enhancing system capacity and overall energy efficiency towards a connected, sustainable world. However, the stringent quality-of-service (QoS) requirements for ultra-massive connectivity, limited network resources, and interference management pose significant challenges to deploying multiple device-to-device pairs (DDPs) without disrupting cellular users. Hence, intelligent resource management and power control are indispensable for alleviating interference among DDPs and optimizing overall system performance and global energy efficiency. Considering this, we present a federated DRL-based method for energy-efficient resource management in a D2D-assisted heterogeneous network (HetNet). We formulate a joint optimization problem of power control and channel allocation to maximize the system's energy efficiency under QoS constraints for cellular user equipment (CUEs) and DDPs. The proposed scheme employs federated learning as a decentralized training paradigm to preserve user privacy, and a double deep Q-network (DDQN) is used for intelligent resource management. The DDQN method uses two separate Q-networks for action selection and target estimation to rationalize the transmit power and dynamic channel selection, in which DDPs, acting as agents, reuse the uplink channels of CUEs. Simulation results show that the proposed method improves overall system energy efficiency by 41.52% and achieves sum-rate gains of 11.65%, 24.78%, and 47.29% over multi-agent actor-critic (MAAC), distributed deep-deterministic policy gradient (D3PG), and deep Q-network (DQN) scheduling, respectively.
Moreover, the proposed scheme achieves a 5.88%, 15.79%, and 27.27% reduction in cellular outage probability compared to MAAC, D3PG, and DQN scheduling, respectively, making it a robust solution for energy-efficient resource allocation in D2D-assisted 6G networks.
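The abstract's key mechanism is the double-DQN update, where the online network selects the next action and the target network evaluates it. A minimal sketch of that target computation is shown below; all variable names and values are illustrative assumptions, not taken from the paper, and the paper's actual state/action spaces (channel and power choices of DDP agents) are not modeled here.

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, gamma=0.99, dones=None):
    """Double-DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).

    The online network *selects* the greedy next action; the target network
    *evaluates* it, which curbs the overestimation bias of vanilla DQN.
    """
    best_actions = np.argmax(next_q_online, axis=1)       # selection (online net)
    batch_idx = np.arange(len(rewards))
    evaluated = next_q_target[batch_idx, best_actions]    # evaluation (target net)
    if dones is None:
        dones = np.zeros_like(rewards)
    return rewards + gamma * (1.0 - dones) * evaluated

# Toy batch of two transitions (hypothetical Q-values for two actions)
rewards = np.array([1.0, 0.5])
q_online_next = np.array([[0.2, 0.8], [0.9, 0.1]])   # online net Q(s', .)
q_target_next = np.array([[0.3, 0.4], [0.7, 0.6]])   # target net Q(s', .)
print(ddqn_targets(rewards, q_online_next, q_target_next, gamma=0.9))
```

For the first transition the online net picks action 1, the target net scores it 0.4, giving a target of 1.0 + 0.9 × 0.4 = 1.36; a single shared network would instead use its own maximum (0.8), illustrating the overestimation that the two-network split avoids.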
Pages: 109775-109792 (18 pages)