Deep Reinforcement Learning Empowers Wireless Powered Mobile Edge Computing: Towards Energy-Aware Online Offloading

Cited by: 3
Authors
Jiao, Xianlong [1 ]
Wang, Yating [1 ]
Guo, Songtao [1 ]
Zhang, Hong [2 ]
Dai, Haipeng [3 ]
Li, Mingyan [1 ]
Zhou, Pengzhan [1 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
[2] Chongqing Jiaotong Univ, Sch Informat Sci & Engn, Chongqing 400074, Peoples R China
[3] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210024, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Wireless powered mobile edge computing; deep reinforcement learning; online offloading decision; charging resource allocation; COMPUTATION RATE MAXIMIZATION; OPTIMIZATION; ALLOCATION; INTERNET;
DOI
10.1109/TCOMM.2023.3283792
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808; 0809;
Abstract
The deep integration of wireless power transmission and mobile edge computing (MEC) has made wireless powered MEC a new research hotspot in the Internet of Things. In this paper, we focus on the joint optimization of online offloading decisions and charging resource allocation to minimize task completion time under dynamic, time-varying wireless channels. Obtaining the optimal solution requires solving a mixed-integer programming problem in real time, which is proved to be NP-hard and poses nontrivial challenges for conventional optimization methods. To address this problem efficiently, we leverage deep reinforcement learning (DRL) and propose an energy-aware online offloading algorithm called EAOO. The EAOO algorithm empirically learns online offloading decision policies through a well-designed DRL framework and performs charging resource allocation via feasible-solution-region analysis. We further propose a novel feasible decision vector generation method and incorporate crossover and mutation operations to expand the offloading vector search space with a provable feasibility guarantee. Extensive experimental results show that our EAOO algorithm outperforms existing baseline algorithms and achieves near-optimal performance with low CPU execution latency, satisfying practical requirements for real-time operation and efficiency.
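To illustrate the candidate-generation idea mentioned in the abstract, the following Python sketch shows how crossover and mutation could expand a set of binary offloading vectors around a DRL actor's relaxed output. This is a hypothetical illustration, not the authors' implementation: the function name, the parameters (num_candidates, crossover_rate, mutation_rate), and the assumption that the DRL actor outputs per-device offloading probabilities are assumptions made here for the sketch; the feasibility check against charging-resource constraints is left to the caller.

```python
import numpy as np

def expand_offloading_candidates(relaxed, num_candidates=8,
                                 crossover_rate=0.5, mutation_rate=0.1,
                                 rng=None):
    """Hypothetical sketch: expand the binary offloading-vector search space
    via crossover and mutation around a DRL actor's relaxed output.

    relaxed : values in [0, 1], one per wireless device; entry i is the
              actor's (relaxed) probability of offloading device i's task.
    Returns a list of candidate binary offloading vectors; the caller would
    keep only those passing a feasibility check (e.g., charging-resource
    constraints) and pick the one minimizing task completion time.
    """
    rng = np.random.default_rng() if rng is None else rng
    relaxed = np.asarray(relaxed, dtype=float)
    n = len(relaxed)

    # Base candidate: deterministic rounding of the relaxed decision.
    base = (relaxed >= 0.5).astype(int)
    candidates = [base]

    while len(candidates) < num_candidates:
        # Crossover: mix the base vector with a Bernoulli sample of the
        # relaxed probabilities, gene by gene.
        sampled = (rng.random(n) < relaxed).astype(int)
        mask = rng.random(n) < crossover_rate
        child = np.where(mask, sampled, base)

        # Mutation: flip each bit with a small probability to explore
        # beyond the neighborhood of the actor's current policy.
        flips = rng.random(n) < mutation_rate
        child = np.where(flips, 1 - child, child)

        candidates.append(child)

    return candidates


if __name__ == "__main__":
    # Example: 6 devices, actor output leaning toward offloading devices 0 and 3.
    relaxed_output = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.6])
    for cand in expand_offloading_candidates(relaxed_output, num_candidates=4):
        print(cand)
```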
Pages: 5214-5227
Page count: 14
Related Papers
50 records in total
  • [1] Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks
    Huang, Liang
    Bi, Suzhi
    Zhang, Ying-Jun Angela
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2020, 19 (11) : 2581 - 2593
  • [2] Augmented Deep Reinforcement Learning for Online Energy Minimization of Wireless Powered Mobile Edge Computing
    Chen, Xiaojing
    Dai, Weiheng
    Ni, Wei
    Wang, Xin
    Zhang, Shunqing
    Xu, Shugong
    Sun, Yanzan
    [J]. IEEE TRANSACTIONS ON COMMUNICATIONS, 2023, 71 (05) : 2698 - 2710
  • [3] Deep Reinforcement Learning for Online Latency Aware Workload Offloading in Mobile Edge Computing
    Akhavan, Zeinab
    Esmaeili, Mona
    Badnava, Babak
    Yousefi, Mohammad
    Sun, Xiang
    Devetsikiotis, Michael
    Zarkesh-Ha, Payman
    [J]. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 2218 - 2223
  • [4] Energy-Delay Tradeoff for Online Offloading Based on Deep Reinforcement Learning in Wireless Powered Mobile-Edge Computing Networks
    Wang, Zhonglin
    Cao, Hankai
    Zhao, Ping
    Rao, Wei
    [J]. Journal of Donghua University (English Edition), 2020, 37 (06) : 498 - 503
  • [5] Energy-Aware Online Task Offloading and Resource Allocation for Mobile Edge Computing
    Liu, Yu
    Mao, Yingling
    Shang, Xiaojun
    Liu, Zhenhua
    Yang, Yuanyuan
    [J]. 2023 IEEE 43RD INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, ICDCS, 2023, : 339 - 349
  • [6] Mobile-Aware Online Task Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing Networks
    Li, Yuting
    Liu, Yitong
    Liu, Xingcheng
    Tu, Qiang
    Xie, Yi
    [J]. 2023 IEEE 34TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS, PIMRC, 2023,
  • [7] Energy-Aware Multi-Server Mobile Edge Computing: A Deep Reinforcement Learning Approach
    Naderializadeh, Navid
    Hashemi, Morteza
    [J]. CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2019, : 383 - 387
  • [8] A Deep Reinforcement Learning Approach for Online Computation Offloading in Mobile Edge Computing
    Zhang, Yameng
    Liu, Tong
    Zhu, Yanmin
    Yang, Yuanyuan
    [J]. 2020 IEEE/ACM 28TH INTERNATIONAL SYMPOSIUM ON QUALITY OF SERVICE (IWQOS), 2020,
  • [9] A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing
    Wang, Qing
    Tan, Wenan
    Qin, Xiaofan
    [J]. HUMAN CENTERED COMPUTING, 2019, 11956 : 419 - 430
  • [10] Online Learning for Distributed Computation Offloading in Wireless Powered Mobile Edge Computing Networks
    Wang, Xiaojie
    Ning, Zhaolong
    Guo, Lei
    Guo, Song
    Gao, Xinbo
    Wang, Guoyin
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (08) : 1841 - 1855