Edge QoE: Computation Offloading With Deep Reinforcement Learning for Internet of Things

Cited by: 120
Authors
Lu, Haodong [1 ]
He, Xiaoming [2 ]
Du, Miao [1 ]
Ruan, Xiukai [3 ]
Sun, Yanfei [4 ,5 ]
Wang, Kun [6 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Coll Internet Things, Nanjing 210003, Peoples R China
[2] Hohai Univ, Coll Comp & Informat, Nanjing 210003, Peoples R China
[3] Wenzhou Univ, Natl Local Joint Engn Lab Digitalized Elect Desig, Wenzhou 325035, Peoples R China
[4] Nanjing Univ Posts & Telecommun, Sch Automat, Nanjing 210003, Peoples R China
[5] Nanjing Univ Posts & Telecommun, Sch Artificial Intelligence, Nanjing 210003, Peoples R China
[6] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA 90095 USA
Source
IEEE INTERNET OF THINGS JOURNAL | 2020, Vol. 7, No. 10
Funding
National Natural Science Foundation of China;
Keywords
Computation offloading; deep reinforcement learning (DRL); edge; Internet of Things (IoT); Quality of Experience (QoE); RESOURCE-ALLOCATION; POWER;
DOI
10.1109/JIOT.2020.2981557
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In edge-enabled Internet of Things (IoT), computation offloading services are expected to offer users better Quality of Experience (QoE) than traditional IoT. Unfortunately, users are generating an ever-growing number of tasks as the IoT environment expands. Meanwhile, current QoE-aware computation offloading is typically solved with deep reinforcement learning (DRL), which suffers from instability and slow convergence. Improving QoE in edge-enabled IoT therefore remains a fundamental challenge. In this article, to enhance QoE, we propose a new QoE model for studying computation offloading. Specifically, the proposed QoE model captures three influential elements: 1) service latency, determined by local computing latency and transmission latency; 2) energy consumption, comprising local computation and transmission consumption; and 3) task success rate, based on the coding error probability. Moreover, we improve the deep deterministic policy gradient (DDPG) algorithm and propose an algorithm named double-dueling-deterministic policy gradients (D3PG) built on the proposed model. In DDPG, the actor network relies heavily on the critic network, which makes performance sensitive to the critic and thus leads to poor stability and slow convergence in the computation offloading process. To address this, we redesign the critic network using Double Q-learning and dueling networks. Extensive experiments verify that our proposed algorithm achieves better stability and faster convergence than existing methods, and they also indicate that it improves QoE performance.
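To make the abstract's two main ingredients concrete, the following Python sketch (using PyTorch) illustrates one plausible reading of them: a weighted QoE score over service latency, energy consumption, and task success rate, and a dueling-style critic paired with a clipped Double-Q target of the kind commonly used to stabilize DDPG-style learning. All names, layer sizes, and weights here are illustrative assumptions, not the authors' D3PG implementation.

```python
# Illustrative sketch only -- class/function names, layer sizes, and QoE
# weights below are assumptions, not the D3PG code from the paper.
import torch
import torch.nn as nn


def qoe_reward(latency, energy, success_rate,
               w_lat=0.4, w_en=0.3, w_suc=0.3):
    """Toy QoE score: penalize service latency and energy consumption,
    reward task success rate (weights are placeholder assumptions)."""
    return -w_lat * latency - w_en * energy + w_suc * success_rate


class DuelingCritic(nn.Module):
    """Dueling-style Q(s, a): a state-value stream V(s) plus an
    advantage stream A(s, a), summed to form the Q-estimate."""

    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.state_enc = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)            # V(s)
        self.adv = nn.Sequential(                    # A(s, a)
            nn.Linear(hidden + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, state, action):
        h = self.state_enc(state)
        return self.value(h) + self.adv(torch.cat([h, action], dim=-1))


def double_q_target(reward, next_state, next_action,
                    target_critic1, target_critic2, gamma=0.99):
    """One common actor-critic realization of Double Q-learning:
    bootstrap from the smaller of two target-critic estimates to curb
    overestimation (the paper's exact update may differ)."""
    with torch.no_grad():
        q1 = target_critic1(next_state, next_action)
        q2 = target_critic2(next_state, next_action)
        return reward + gamma * torch.min(q1, q2)


if __name__ == "__main__":
    s_dim, a_dim = 8, 3                      # hypothetical state/action sizes
    critic1, critic2 = DuelingCritic(s_dim, a_dim), DuelingCritic(s_dim, a_dim)
    s = torch.randn(4, s_dim)                # batch of offloading states
    a = torch.rand(4, a_dim)                 # e.g. offloading ratio, tx power
    r = qoe_reward(latency=torch.rand(4, 1),
                   energy=torch.rand(4, 1),
                   success_rate=torch.rand(4, 1))
    y = double_q_target(r, s, a, critic1, critic2)
    print(y.shape)                           # torch.Size([4, 1])
```

In a full DDPG-style loop, the actor would be updated by ascending one critic's Q-estimate while both critics regress toward this target.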
Pages: 9255 - 9265
Number of pages: 11
Related Papers
50 records in total
  • [21] Deep Reinforcement Learning-Based Computation Offloading in Vehicular Edge Computing
    Zhan, Wenhan
    Luo, Chunbo
    Wang, Jin
    Min, Geyong
    Duan, Hancong
    [J]. 2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [22] Deep reinforcement learning for the computation offloading in MIMO-based Edge Computing
    Sadiki, Abdeladim
    Bentahar, Jamal
    Dssouli, Rachida
    En-Nouaary, Abdeslam
    Otrok, Hadi
    [J]. AD HOC NETWORKS, 2023, 141
  • [23] A Deep Reinforcement Learning Approach for Online Computation Offloading in Mobile Edge Computing
    Zhang, Yameng
    Liu, Tong
    Zhu, Yanmin
    Yang, Yuanyuan
    [J]. 2020 IEEE/ACM 28TH INTERNATIONAL SYMPOSIUM ON QUALITY OF SERVICE (IWQOS), 2020,
  • [24] A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing
    Wang, Qing
    Tan, Wenan
    Qin, Xiaofan
    [J]. HUMAN CENTERED COMPUTING, 2019, 11956 : 419 - 430
  • [25] A Distributed Computation Offloading Strategy for Edge Computing Based on Deep Reinforcement Learning
    Lai, Hongyang
    Yang, Zhuocheng
    Li, Jinhao
    Wu, Celimuge
    Bao, Wugedele
    [J]. MOBILE NETWORKS AND MANAGEMENT, MONAMI 2021, 2022, 418 : 73 - 86
  • [26] QoE-Based Cooperative Task Offloading with Deep Reinforcement Learning in Mobile Edge Networks
    He, Xiaoming
    Lu, Haodong
    Huang, Huawei
    Mao, Yingchi
    Wang, Kun
    Guo, Song
    [J]. IEEE WIRELESS COMMUNICATIONS, 2020, 27 (03) : 111 - 117
  • [27] Value-based multi-agent deep reinforcement learning for collaborative computation offloading in internet of things networks
    Li, Han
    Meng, Shunmei
    Shang, Jing
    Huang, Anqi
    Cai, Zhicheng
[J]. WIRELESS NETWORKS, 2023, 29 (8): 6915 - 6928
  • [28] Dual-Q network deep reinforcement learning-based computation offloading method for industrial internet of things
    Du, Ruizhong
    Wu, Jinru
    Gao, Yan
[J]. JOURNAL OF SUPERCOMPUTING, 2024, 80 (17): 25590 - 25615
  • [29] Energy-efficient UAV-enabled computation offloading for industrial internet of things: a deep reinforcement learning approach
    Shi, Shuo
    Wang, Meng
    Gu, Shushi
    Zheng, Zhong
    [J]. WIRELESS NETWORKS, 2024, 30 (05) : 3921 - 3934
  • [30] Joint Computation Offloading and Resource Allocation for Edge-Cloud Collaboration in Internet of Vehicles via Deep Reinforcement Learning
    Huang, Jiwei
    Wan, Jiangyuan
    Lv, Bofeng
    Ye, Qiang
    Chen, Ying
[J]. IEEE SYSTEMS JOURNAL, 2023, 17 (02): 2500 - 2511