Towards cost-effective service migration in mobile edge: A Q-learning approach

Cited by: 6
Authors
Wang, Yang [1 ]
Cao, Shan [1 ]
Ren, Hongshuai [1 ]
Li, Jianjun [2 ]
Ye, Kejiang [1 ]
Xu, Chengzhong [3 ]
Chen, Xi [4 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
[2] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, Hangzhou, Peoples R China
[3] Univ Macau, Fac Sci & Technol, State Key Lab IoT Smart City, Macau, Peoples R China
[4] CAS Res Ctr Ecol & Environm Cent Asia, Urumqi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Mobile edge computing; Dynamic service migration; Reinforcement learning; Q-learning; Software-defined networking;
DOI
10.1016/j.jpdc.2020.08.008
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline code
081202;
Abstract
Service migration in mobile edge computing is a promising approach to improving the quality of service (QoS) for mobile users while also reducing the network operational cost for service providers. These benefits are not free, however: migration incurs bulk-data transfer and likely service disruption, which can in turn increase the overall service cost. To gain the benefits of service migration while minimizing its cost across the edge nodes, in this paper we leverage reinforcement learning (RL) to design a cost-effective framework, called Mig-RL, for service migration in a mobile edge environment, with the reduction of total service cost as its goal. Mig-RL leverages the infrastructure of the edge network and deploys a migration agent that uses Q-learning to learn the optimal policy with respect to the service migration status. Mig-RL differs from existing work in several major aspects. First, we fully exploit the nature of this problem in a modest migration space, constraining the number of service replicas so that a defined state-action space can be handled effectively, as opposed to methods that must always approximate a huge state-action space to achieve policy optimality. Second, we advocate a migration policy-base that acts as a cache, saving the learning process by retrieving the most effective policy whenever a similar migration pattern recurs over time. Finally, by exploiting the idea of software-defined networking, we also investigate an efficient implementation of Mig-RL in a mobile edge network. Experimental results based on real and synthesized access sequences show that Mig-RL, compared with selected existing algorithms, can substantially reduce service costs while efficiently improving QoS by adapting to changes in mobile access patterns. (C) 2020 Elsevier Inc. All rights reserved.
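The core idea described in the abstract — a tabular Q-learning agent over a deliberately small state-action space, where the state pairs the service's current host with the user's location and the reward is the negative service cost — can be sketched as follows. This is a minimal illustrative toy, not the paper's actual formulation: the linear topology, the cost model (`abs` distance plus a fixed migration penalty), and all parameter values are assumptions for demonstration only.

```python
import random
from collections import defaultdict

# Hypothetical toy setup: N edge nodes on a line; the state is
# (service_node, user_node); an action relocates the service to a node.
N = 5
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration
MIG_COST = 2.0                       # one-off bulk-transfer/disruption cost (assumed)

Q = defaultdict(float)               # Q[(state, action)] -> value, default 0.0

def cost(service, user, new_service):
    # Communication cost grows with service-user distance; migrating
    # adds a fixed transfer/disruption penalty (illustrative model).
    return abs(new_service - user) + (MIG_COST if new_service != service else 0.0)

def choose(state):
    # Epsilon-greedy action selection over the small action set.
    if random.random() < EPS:
        return random.randrange(N)
    return max(range(N), key=lambda a: Q[(state, a)])

def step(state, user_next):
    service, user = state
    action = choose(state)                  # node that hosts the service next
    r = -cost(service, user, action)        # reward = negative service cost
    nxt = (action, user_next)
    best_next = max(Q[(nxt, a)] for a in range(N))
    # Standard Q-learning update.
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    return nxt

random.seed(0)
state = (0, 0)
for _ in range(20000):                      # user follows a random walk
    user_next = min(N - 1, max(0, state[1] + random.choice([-1, 0, 1])))
    state = step(state, user_next)
```

Because every reward is a negated (non-negative) cost, all learned Q-values are at most zero; the greedy policy then trades communication cost against the migration penalty, which is the cost trade-off the abstract describes.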
Pages: 175-188
Page count: 14
Related papers
50 records
  • [1] Identifying optimally cost-effective dynamic treatment regimes with a Q-learning approach
    Illenberger, Nicholas
    Spieker, Andrew J.
    Mitra, Nandita
    [J]. JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES C-APPLIED STATISTICS, 2023, 72 (02) : 434 - 449
  • [2] Cost-Effective Federated Learning in Mobile Edge Networks
    Luo, Bing
    Li, Xiang
    Wang, Shiqiang
    Huang, Jianwei
    Tassiulas, Leandros
    [J]. IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (12) : 3606 - 3621
  • [3] Reinforcement learning for cost-effective IoT service caching at the edge
    Huang, Binbin
    Liu, Xiao
    Xiang, Yuanyuan
    Yu, Dongjin
    Deng, Shuiguang
    Wang, Shangguang
    [J]. JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2022, 168 : 120 - 136
  • [4] A Cost-effective Mobile Edge Computing Model
    Lin Qing
    Huang Yulei
    [J]. PROCEEDINGS OF THE 2018 2ND INTERNATIONAL CONFERENCE ON ECONOMIC DEVELOPMENT AND EDUCATION MANAGEMENT (ICEDEM 2018), 2018, 290 : 304 - 308
  • [5] A Fast Deep Q-learning Network Edge Cloud Migration Strategy for Vehicular Service
    Peng Jun
    Wang Chenglong
    Jiang Fu
    Gu Xin
    Mu Yueyue
    Liu Weirong
    [J]. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2020, 42 (01) : 58 - 64
  • [6] Service migration in mobile edge computing: A deep reinforcement learning approach
    Wang, Hongman
    Li, Yingxue
    Zhou, Ao
    Guo, Yan
    Wang, Shangguang
    [J]. INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, 2023, 36 (01)
  • [7] An Efficient Initialization Approach of Q-learning for Mobile Robots
    Song, Yong
    Li, Yi-bin
    Li, Cai-hong
    Zhang, Gui-fang
    [J]. INTERNATIONAL JOURNAL OF CONTROL AUTOMATION AND SYSTEMS, 2012, 10 (01) : 166 - 172
  • [8] ECQ: An Energy-Efficient, Cost-Effective and QoS-Aware Method for Dynamic Service Migration in Mobile Edge Computing Systems
    Ahmed, Awder
    Azizi, Sadoon
    Zeebaree, Subhi R. M.
    [J]. WIRELESS PERSONAL COMMUNICATIONS, 2023, 133 (04) : 2467 - 2501