Service migration for mobile edge computing based on partially observable Markov decision processes

Cited by: 11
Authors
Chen, Wen [1]
Chen, Yuhu [1]
Liu, Jiawei [1]
Affiliations
[1] Donghua Univ, Sch Informat Sci & Technol, Shanghai, Peoples R China
Keywords
Mobile edge computing; Multi-user service migration; Deep reinforcement learning; Partially observable Markov decision processes; Virtual machine
DOI
10.1016/j.compeleceng.2022.108552
Chinese Library Classification (CLC)
TP3 [Computing technology; Computer technology]
Discipline code
0812
Abstract
With the continuous development of mobile edge computing, users increasingly offload tasks to edge servers, which are closer to them than cloud services, in pursuit of a better experience. Because users move and each edge server has limited coverage, guaranteeing service quality and preventing service interruption is a key challenge. This article studies the service migration problem: deciding when, where, and how to migrate an ongoing service from its current edge server to a target edge server. Since edge servers can obtain only partial user information, we model service migration as a partially observable Markov decision process. To minimize user delay and system energy consumption, we propose a service migration decision algorithm based on deep recurrent Q-learning (DRQNSM). Extensive experiments show that our algorithm outperforms several classical reinforcement learning algorithms.
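The core of the POMDP formulation above is that the edge server cannot observe the user's true location directly and must act on a belief distribution inferred from noisy observations. The following is a minimal sketch of that idea (not the paper's algorithm): a Bayes-filter belief update over a toy two-cell setting, followed by a threshold-based migration decision. The transition matrix `T`, observation model `O`, and the 0.5 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy POMDP for service migration: the user's true cell (state 0 or 1) is
# hidden; the server receives a noisy location observation each step.
T = np.array([[0.8, 0.2],    # T[s, s'] = P(next cell s' | current cell s)
              [0.3, 0.7]])
O = np.array([[0.9, 0.1],    # O[s, o]  = P(observation o | true cell s)
              [0.2, 0.8]])

def belief_update(b, obs):
    """One Bayes-filter step: predict with T, then correct with the
    likelihood of the received observation."""
    predicted = T.T @ b              # prediction: push belief through dynamics
    unnorm = O[:, obs] * predicted   # correction: weight by observation likelihood
    return unnorm / unnorm.sum()     # renormalize to a probability vector

b = np.array([0.5, 0.5])             # initially the user's cell is unknown
b = belief_update(b, obs=0)          # server observes the user near cell 0

# Simple (hypothetical) policy: keep the service on server 0 while cell 0
# remains the most likely user location, otherwise migrate.
decision = "stay" if b[0] >= 0.5 else "migrate"
```

A DRQN-style agent replaces this hand-built filter with a recurrent Q-network whose hidden state summarizes the observation history, which is what lets it act under partial observability without an explicit belief update.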
Pages: 13