Deep reinforcement learning based mobility management in a MEC-Enabled cellular IoT network

Times Cited: 0
Authors
Kabir, Homayun [1 ]
Tham, Mau-Luen [1 ]
Chang, Yoong Choon [1 ]
Chow, Chee-Onn [2 ]
Affiliations
[1] Univ Tunku Abdul Rahman, Lee Kong Chian Fac Engn & Sci, Dept Elect & Elect Engn, Sungai Long Campus, Selangor 43000, Malaysia
[2] Univ Malaya, Fac Engn, Dept Elect Engn, Malaya 50603, Malaysia
Keywords
Handover management; Edge computing; CIoT; Deep reinforcement learning; Parametrized deep Q network; EDGE; HANDOVER; ALLOCATION; INTERNET;
DOI
10.1016/j.pmcj.2024.101987
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Mobile Edge Computing (MEC) has paved the way for a new Cellular Internet of Things (CIoT) paradigm, in which resource-constrained CIoT Devices (CDs) can offload tasks to a computing server located at either a Base Station (BS) or an edge node. For CDs moving at high speed, seamless mobility is crucial during the migration of MEC services from one BS to another. In this paper, we investigate the problem of joint power allocation and Handover (HO) management in a MEC network using a Deep Reinforcement Learning (DRL) approach. To handle the hybrid action space (continuous power allocation and discrete HO decisions), we leverage the Parameterized Deep Q-Network (P-DQN) to learn a near-optimal solution. Simulation results show that the proposed P-DQN algorithm outperforms conventional approaches, such as nearest BS + random power and random BS + random power, in terms of reward, HO cost, and total power consumption. The results also show that HOs occur almost exactly at the cell-edge point between two BSs, meaning that HO is managed nearly perfectly. In addition, the total power consumption is around 0.151 W with P-DQN, compared with about 0.75 W for the nearest BS + random power and random BS + random power schemes.
Pages: 17
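
The hybrid action structure described in the abstract (a discrete handover/BS-association decision coupled with a continuous transmit-power level) can be illustrated with a minimal P-DQN skeleton. The sketch below is not the authors' implementation: the state dimension, number of candidate BSs, layer sizes, and power budget (STATE_DIM, NUM_BS, P_MAX) are assumptions chosen only for demonstration, written in PyTorch.

# Illustrative P-DQN sketch for a hybrid action space:
# discrete BS/handover choice + continuous transmit power per CD.
# All dimensions and bounds below are assumed values, not from the paper.
import torch
import torch.nn as nn

NUM_BS = 3          # assumed number of candidate base stations (discrete actions)
STATE_DIM = 6       # assumed state features (e.g., CD position, channel gains)
P_MAX = 0.2         # assumed per-device power budget in watts

class ParamActor(nn.Module):
    """Maps state -> one continuous power parameter per candidate BS."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_BS), nn.Sigmoid()   # in [0, 1], scaled to [0, P_MAX]
        )

    def forward(self, state):
        return P_MAX * self.net(state)            # shape: (batch, NUM_BS)

class QNetwork(nn.Module):
    """Scores Q(s, k, x_k) for every discrete BS choice k given all parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_BS, 64), nn.ReLU(),
            nn.Linear(64, NUM_BS)
        )

    def forward(self, state, powers):
        return self.net(torch.cat([state, powers], dim=-1))   # (batch, NUM_BS)

def select_action(actor, qnet, state):
    """Greedy hybrid action: pick the BS with the highest Q, keep its power."""
    with torch.no_grad():
        powers = actor(state)                     # continuous action parameters
        q_values = qnet(state, powers)            # one Q-value per discrete action
        bs = q_values.argmax(dim=-1)              # handover / association decision
        power = powers.gather(-1, bs.unsqueeze(-1)).squeeze(-1)
    return bs, power

if __name__ == "__main__":
    actor, qnet = ParamActor(), QNetwork()
    s = torch.randn(1, STATE_DIM)                 # dummy state for demonstration
    bs, p = select_action(actor, qnet, s)
    print(f"serving BS index: {bs.item()}, transmit power: {p.item():.3f} W")

Following the standard P-DQN formulation, the actor emits one continuous parameter per discrete action so that the Q-network can score every (BS, power) pair jointly; the greedy hybrid action is then the BS with the highest Q-value together with its associated power level.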
Related Papers
50 records in total
  • [31] Privacy-Preserving MEC-Enabled Contextual Online Learning via SDN for Service Selection in IoT
    Mu, Difan
    Zhou, Pan
    Li, Qinghua
    Li, Ruixuan
    Xu, Jie
    2019 IEEE 16TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2019), 2019, : 290 - 298
  • [32] Joint User Association and Resource Allocation Optimization for MEC-Enabled IoT Networks
    Sun, Yaping
    Xu, Jie
    Cui, Shuguang
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 4884 - 4889
  • [33] Trusted Collaboration for MEC-Enabled VR Video Streaming: A Multi-Agent Reinforcement Learning Approach
    Xu, Yueqiang
    Zhang, Heli
    Li, Xi
    Yu, F. Richard
    Leung, Victor C. M.
    Ji, Hong
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (09) : 12167 - 12180
  • [34] Model-Driven Dependability Assessment of Microservice Chains in MEC-Enabled IoT
    Bai, Jing
    Chang, Xiaolin
    Machida, Fumio
    Trivedi, Kishor S.
    Li, Yaru
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2023, 16 (04) : 2769 - 2785
  • [35] Experimental Evaluation of Modern TCP Variants in MEC-enabled Cellular Networks
    Wang, Zhi
    Tan, Yiming
    Zhang, Xing
    2018 10TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2018,
  • [36] Location Privacy-Aware Offloading for MEC-Enabled IoT: Optimality and Heuristics
    Hua, Wei
    Zhou, Ziyang
    Huang, Linyu
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (21) : 19270 - 19281
  • [37] Task Scheduling and Resource Management in MEC-Enabled Computing Networks
    Feng, Jie
    Zhang, Wenjing
    Liu, Lei
    Du, Jianbo
    Xiao, Ming
    Pei, Qingqi
    MOBILE NETWORKS AND MANAGEMENT, MONAMI 2021, 2022, 418 : 127 - 137
  • [38] dRG-MEC: Decentralized Reinforced Green Offloading for MEC-enabled Cloud Network
    Aftab, Asad
    Rehman, Semeen
    2023 19TH INTERNATIONAL CONFERENCE ON WIRELESS AND MOBILE COMPUTING, NETWORKING AND COMMUNICATIONS, WIMOB, 2023, : 338 - 343
  • [39] Computation Offloading Based on Deep Reinforcement Learning for UAV-MEC Network
    Wan, Zheng
    Luo, Yuxuan
    Dong, Xiaogang
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2023, PT IV, 2024, 14490 : 265 - 276
  • [40] Cache Sharing in UAV-Enabled Cellular Network: A Deep Reinforcement Learning-Based Approach
    Muslih, Hamidullah
    Kazmi, S. M. Ahsan
    Mazzara, Manuel
    Baye, Gaspard
    IEEE ACCESS, 2024, 12 : 43422 - 43435