Age-Based Scheduling for Mobile Edge Computing: A Deep Reinforcement Learning Approach

Cited: 0
Authors
He, Xingqiu [1 ,2 ]
You, Chaoqun [1 ,2 ]
Quek, Tony Q. S. [3 ,4 ]
Affiliations
[1] Fudan Univ, Intelligent Networking & Comp Res Ctr, Shanghai 200437, Peoples R China
[2] Fudan Univ, Sch Comp Sci, Shanghai 200437, Peoples R China
[3] Singapore Univ Technol & Design, Singapore 487372, Singapore
[4] Yonsei Univ, Yonsei Frontier Lab, Seoul 03722, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Task analysis; Heuristic algorithms; System dynamics; Measurement; Data processing; Minimization; Servers; Age of information; mobile edge computing; post-decision state; deep reinforcement learning; RESOURCE-ALLOCATION; STATUS UPDATE; PEAK AGE; INFORMATION; COMPUTATION; OPTIMIZATION; NETWORKS; MANAGEMENT; TRADEOFF; QUEUE;
DOI
10.1109/TMC.2024.3370101
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
With the rapid development of Mobile Edge Computing (MEC), various real-time applications have been deployed to benefit people's daily lives. The performance of these applications relies heavily on the freshness of the collected environmental information, which can be quantified by its Age of Information (AoI). The traditional definition of AoI assumes that status information can be actively sampled and directly used. However, for many MEC-enabled applications, the desired status information is updated in an event-driven manner and requires further data processing. To better serve these applications, we propose a new definition of AoI and, based on the redefined AoI, formulate an online AoI minimization problem for MEC systems. Notably, the problem can be interpreted as a Markov Decision Process (MDP), thus enabling its solution through Reinforcement Learning (RL) algorithms. Nevertheless, traditional RL algorithms are designed for MDPs with completely unknown system dynamics and hence usually suffer from long convergence times. To accelerate the learning process, we introduce Post-Decision States (PDSs) to exploit the partial knowledge of the system's dynamics. We also combine PDSs with deep RL to further improve the algorithm's applicability, scalability, and robustness. Numerical results demonstrate that our algorithm outperforms the benchmarks under various scenarios.
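For illustration only, the sketch below shows how a post-decision state (PDS) can be used to exploit the known part of an MDP's dynamics in a toy AoI-minimization setting: the deterministic effect of the scheduling action (here, processing resets the AoI) is evaluated exactly, and only the unknown stochastic part (event arrivals) is learned from samples. The state space, cost model, and the split between known and unknown dynamics are assumptions made for exposition; this is a tabular sketch and does not reproduce the paper's deep-RL algorithm.

```python
# Minimal PDS-based value-learning sketch on a toy AoI process (illustrative
# assumptions throughout; not the algorithm from the paper).
import numpy as np

N_AOI, N_ACTIONS = 10, 2      # assumed toy state space: AoI in {0..9}; actions: 0 = idle, 1 = process
GAMMA, ALPHA = 0.95, 0.1      # assumed discount factor and learning rate
V = np.zeros(N_AOI)           # value estimates indexed by post-decision state

def known_transition(aoi, action):
    """Known (deterministic) part of the dynamics: processing resets the AoI."""
    return 0 if action == 1 else aoi

def unknown_transition(pds, rng):
    """Unknown (stochastic) part: AoI grows by a random amount until the next event."""
    return min(pds + int(rng.integers(1, 3)), N_AOI - 1)

def cost(aoi, action):
    """Per-step cost: current AoI plus an assumed processing cost."""
    return aoi + (0.5 if action == 1 else 0.0)

rng = np.random.default_rng(0)
aoi = 0
for _ in range(50_000):
    # Greedy action: the known transition lets us evaluate each action exactly
    # against the learned PDS values, without a separate Q-table.
    action = int(np.argmin([cost(aoi, a) + GAMMA * V[known_transition(aoi, a)]
                            for a in range(N_ACTIONS)]))
    pds = known_transition(aoi, action)        # post-decision state
    next_aoi = unknown_transition(pds, rng)    # environment realizes the unknown part
    # TD update on the PDS value: only the unknown transition is learned from data.
    next_action = int(np.argmin([cost(next_aoi, a) + GAMMA * V[known_transition(next_aoi, a)]
                                 for a in range(N_ACTIONS)]))
    target = cost(next_aoi, next_action) + GAMMA * V[known_transition(next_aoi, next_action)]
    V[pds] += ALPHA * (target - V[pds])
    aoi = next_aoi
```

Because the expectation over the unknown dynamics is taken at the post-decision state rather than at each state-action pair, the learned table has the size of the state space only and the known part of the model is used exactly, which is the intuition behind why PDS-based learning can converge faster than plain Q-learning.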
Pages: 9881-9897
Page count: 17
Related Papers
50 records in total
  • [31] Research on Task Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing
    Lu H.
    Gu C.
    Luo F.
    Ding W.
    Yang T.
    Zheng S.
Gu, Chunhua (chgu@ecust.edu.cn), Science Press, 57: 1539-1554
  • [32] Deep reinforcement learning-based microservice selection in mobile edge computing
    Guo, Feiyan
    Tang, Bing
    Tang, Mingdong
    Liang, Wei
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2023, 26 (02): 1319-1335
  • [33] Multiple Workflows Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing
    Gao, Yongqiang
    Wang, Yanping
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT I, 2022, 13155 : 476 - 493
  • [34] Deep Reinforcement Learning-Based Server Selection for Mobile Edge Computing
    Liu, Heting
    Cao, Guohong
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2021, 70 (12) : 13351 - 13363
  • [35] Maritime mobile edge computing offloading method based on deep reinforcement learning
    Su X.
    Meng L.
    Zhou Y.
    Celimuge W.
Tongxin Xuebao/Journal on Communications, 2022, 43 (10): 133-145
  • [36] Task Offloading Optimization in Mobile Edge Computing based on Deep Reinforcement Learning
    Silva, Carlos
    Magaia, Naercio
    Grilo, Antonio
PROCEEDINGS OF THE INT'L ACM CONFERENCE ON MODELING, ANALYSIS AND SIMULATION OF WIRELESS AND MOBILE SYSTEMS, MSWIM 2023, 2023: 109-118
  • [37] Federated Learning for Online Resource Allocation in Mobile Edge Computing: A Deep Reinforcement Learning Approach
    Zheng, Jingjing
    Li, Kai
    Mhaisen, Naram
    Ni, Wei
    Tovar, Eduardo
    Guizani, Mohsen
2023 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC, 2023
  • [38] Reinforcement Learning-Based Mobile Edge Computing and Transmission Scheduling for Video Surveillance
Yang, Kunpeng
    Shan, Hangguan
    Sun, Tengxu
    Hu, Haoji
    Hu, Roland
    Wu, Yingxiao
    Yu, Lu
    Zhang, Zhaoyang
    Quek, Tony Q. S.
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2022, 10 (02) : 1142 - 1156
  • [39] iRAF: A Deep Reinforcement Learning Approach for Collaborative Mobile Edge Computing IoT Networks
    Chen, Jienan
    Chen, Siyu
    Wang, Qi
    Cao, Bin
    Feng, Gang
    Hu, Jianhao
IEEE INTERNET OF THINGS JOURNAL, 2019, 6 (04): 7011-7024
  • [40] Deep Reinforcement Learning Approach for UAV-Assisted Mobile Edge Computing Networks
    Hwang, Sangwon
    Park, Juseong
    Lee, Hoon
    Kim, Mintae
    Lee, Inkyu
2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022: 3839-3844