Deep Reinforcement Learning for Energy-Efficient Computation Offloading in Mobile-Edge Computing

Cited by: 135
Authors
Zhou, Huan [1 ]
Jiang, Kai [1 ]
Liu, Xuxun [2 ]
Li, Xiuhua [3 ,4 ]
Leung, Victor C. M. [5 ,6 ]
Affiliations
[1] China Three Gorges Univ, Coll Comp & Informat Technol, Yichang 443002, Peoples R China
[2] South China Univ Technol, Res Ctr Multimedia Informat Secur Detect & Intell, Guangzhou 510641, Peoples R China
[3] Chongqing Univ, Sch Big Data & Software Engn, Chongqing, Peoples R China
[4] Chongqing Univ, Key Lab Dependable Serv Comp Cyber Phys Soc, Minist Educ, Chongqing 401331, Peoples R China
[5] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[6] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
Source
IEEE INTERNET OF THINGS JOURNAL | 2022, Vol. 9, No. 2
Funding
National Natural Science Foundation of China
Keywords
Computation offloading; energy consumptions; mobile-edge computing (MEC); reinforcement learning (RL); resource allocation; RESOURCE-ALLOCATION; REVENUE MAXIMIZATION; NETWORKING; CLOUD;
DOI
10.1109/JIOT.2021.3091142
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Mobile-edge computing (MEC) has emerged as a promising computing paradigm in the 5G architecture, which can empower user equipment (UEs) with computation and energy resources by migrating workloads from UEs to nearby MEC servers. Although computation offloading and resource allocation in MEC have been studied with different optimization objectives, most existing work focuses on improving performance in quasistatic systems and seldom considers time-varying system conditions. In this article, we investigate the joint optimization of computation offloading and resource allocation in a dynamic multiuser MEC system. Our objective is to minimize the energy consumption of the entire MEC system, subject to a delay constraint and the uncertain resource requirements of heterogeneous computation tasks. We formulate the problem as a mixed-integer nonlinear programming (MINLP) problem, and propose a value-iteration-based reinforcement learning (RL) method, Q-learning, to determine the joint policy of computation offloading and resource allocation. To avoid the curse of dimensionality, we further propose a double deep Q-network (DDQN)-based method, which can efficiently approximate the value function of Q-learning. Simulation results demonstrate that the proposed methods significantly outperform the other baseline methods in different scenarios, except for the exhaustion (exhaustive search) method. In particular, the proposed DDQN-based method achieves performance very close to that of the exhaustion method, and reduces energy consumption by an average of 20%, 35%, and 53% compared with the offloading-decision, local-first, and offloading-first methods, respectively, when the number of UEs is 5.
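To make the DDQN technique named in the abstract concrete, below is a minimal sketch of one DDQN update step in PyTorch. Everything here (the network width, state/action dimensions, learning rate, and the `QNet`/`ddqn_update` names) is an illustrative assumption rather than the authors' code; in the paper's setting, the state would encode the time-varying task and channel conditions, and each discrete action a joint offloading and resource-allocation decision.

```python
# Minimal DDQN update sketch (illustrative assumptions, not the authors' code).
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Approximates Q(s, a) over a discrete action set; here each action would
    stand for one joint offloading + resource-allocation choice."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)

def ddqn_update(online, target, optimizer, batch, gamma=0.99):
    """One DDQN step: the online net *selects* the greedy next action, the
    (periodically synced) target net *evaluates* it."""
    s, a, r, s_next, done = batch
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s, a) taken
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1, keepdim=True)   # selection
        q_next = target(s_next).gather(1, a_star).squeeze(1)  # evaluation
        y = r + gamma * (1.0 - done) * q_next                 # TD target
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    state_dim, n_actions, batch_size = 8, 4, 32  # illustrative sizes
    online, target = QNet(state_dim, n_actions), QNet(state_dim, n_actions)
    target.load_state_dict(online.state_dict())  # sync the target periodically
    opt = torch.optim.Adam(online.parameters(), lr=1e-3)
    batch = (                                    # random stand-in transitions
        torch.randn(batch_size, state_dim),
        torch.randint(0, n_actions, (batch_size,)),
        torch.randn(batch_size),
        torch.randn(batch_size, state_dim),
        torch.zeros(batch_size),
    )
    print("TD loss:", ddqn_update(online, target, opt, batch))
```

The defining DDQN move, visible in `ddqn_update`, is that action selection and action evaluation use different networks; this mitigates the Q-value overestimation of plain deep Q-learning and is what allows the method to approximate the Q-learning value function without enumerating the joint state-action space.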
Pages: 1517-1530
Page count: 14
Related Papers
50 in total
  • [1] Delay-Aware and Energy-Efficient Computation Offloading in Mobile-Edge Computing Using Deep Reinforcement Learning. Ale, Laha; Zhang, Ning; Fang, Xiaojie; Chen, Xianfu; Wu, Shaohua; Li, Longzhuang. IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2021, 7(3): 881-892.
  • [2] Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading. You, Changsheng; Huang, Kaibin; Chae, Hyukjin; Kim, Byoung-Hoon. IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2017, 16(3): 1397-1411.
  • [3] Advanced Energy-Efficient Computation Offloading Using Deep Reinforcement Learning in MTC Edge Computing. Khan, Israr; Tao, Xiaofeng; Rahman, G. M. Shafiqur; Rehman, Waheed Ur; Salam, Tabinda. IEEE ACCESS, 2020, 8: 82867-82875.
  • [4] Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks. Huang, Liang; Bi, Suzhi; Zhang, Ying-Jun Angela. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2020, 19(11): 2581-2593.
  • [5] Asynchronous Mobile-Edge Computation Offloading: Energy-Efficient Resource Management. You, Changsheng; Zeng, Yong; Zhang, Rui; Huang, Kaibin. IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2018, 17(11): 7590-7605.
  • [6] Energy-Efficient Mobile-Edge Computation Offloading for Applications with Shared Data. He, Xiangyu; Xing, Hong; Chen, Yue; Nallanathan, Arumugam. 2018 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2018.
  • [7] Discontinuous Computation Offloading for Energy-Efficient Mobile Edge Computing. Merluzzi, Mattia; di Pietro, Nicola; Di Lorenzo, Paolo; Strinati, Emilio Calvanese; Barbarossa, Sergio. IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING, 2022, 6(2): 1242-1257.
  • [8] Energy-Efficient Mobile-Edge Computation Offloading over Multiple Fading Blocks. Fan, Rongfei; Li, Fudong; Jin, Song; Wang, Gongpu; Jiang, Hai; Wu, Shaohua. 2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019.
  • [9] Deep reinforcement learning for computation offloading in mobile edge computing environment. Chen, Miaojiang; Wang, Tian; Zhang, Shaobo; Liu, Anfeng. COMPUTER COMMUNICATIONS, 2021, 175: 1-12.
  • [10] Lyapunov-Guided Deep Reinforcement Learning for Stable Online Computation Offloading in Mobile-Edge Computing Networks. Bi, Suzhi; Huang, Liang; Wang, Hui; Zhang, Ying-Jun Angela. IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20(11): 7519-7537.