Deep Reinforcement Learning-Based Task Offloading and Resource Allocation for Industrial IoT in MEC Federation System

Cited by: 11
Authors:
Do, Huong Mai [1]
Tran, Tuan Phong
Yoo, Myungsik [2]
Affiliations:
[1] Soongsil Univ, Dept Informat Commun Convergence Technol, Seoul, South Korea
[2] Soongsil Univ, Sch Elect Engn, Seoul, South Korea
Keywords:
MEC federation; IIoT; task offloading; resource allocation; Markov decision process; deep reinforcement learning; NETWORKS; INTERNET
DOI:
10.1109/ACCESS.2023.3302518
Chinese Library Classification: TP [Automation technology; computer technology]
Discipline code: 0812
Abstract:
The rapid growth of the Internet of Things (IoT) has driven the development of intelligent industrial systems known as the Industrial IoT (IIoT). These systems integrate smart devices, sensors, cameras, and 5G technologies to enable automated data gathering and analysis, boost production efficiency, and overcome scalability issues. However, IoT devices have limited computing power, memory, and battery capacity. To address these constraints, mobile edge computing (MEC) has been introduced into IIoT systems to reduce the computational burden on the devices. Whereas the dedicated MEC paradigm limits optimal resource utilization and load balancing, MEC federation can potentially overcome these drawbacks. However, previous studies have relied on idealized assumptions when developing their optimization models, raising concerns about practical applicability. In this study, we investigate the joint offloading-decision and resource-allocation problem for MEC federation in the IIoT. Specifically, we construct an optimization model that accounts for the real-world factors influencing system performance. To minimize the total energy-delay cost, the original problem is transformed into a Markov decision process. Considering the dynamics and continuity of task generation, we solve the Markov decision process with deep reinforcement learning. We propose a resource-allocation scheme based on the deep deterministic policy gradient algorithm with prioritized experience replay (DDPG-PER), which can handle high-dimensional continuous action and state spaces. Simulation results indicate that the proposed approach effectively minimizes the energy-delay cost of tasks.
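The DDPG-PER scheme named in the abstract combines an actor-critic DDPG agent with a prioritized experience replay buffer. The record does not include code, so the following is a minimal Python sketch of only the prioritized replay component such an agent would sample from; the class and method names, the capacity, and the alpha/beta/epsilon values are illustrative assumptions, not the authors' implementation.

import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (sketch, assumed hyperparameters).

    Stores (state, action, reward, next_state, done) transitions, samples them
    with probability proportional to |TD error|**alpha, and returns
    importance-sampling weights that correct the sampling bias.
    """

    def __init__(self, capacity=100_000, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.buffer = []                                   # stored transitions
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                                       # next write index (ring buffer)

    def add(self, state, action, reward, next_state, done):
        # New transitions get the current maximum priority so they are sampled
        # at least once before their TD error is known. In a DDPG-PER offloading
        # agent, `reward` could be the negative energy-delay cost of the task
        # (an assumption consistent with the abstract's objective).
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append((state, action, reward, next_state, done))
        else:
            self.buffer[self.pos] = (state, action, reward, next_state, done)
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices proportionally to priority**alpha.
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights, normalized by the maximum weight.
        weights = (len(self.buffer) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Refresh priorities with the latest absolute critic TD errors.
        self.priorities[idx] = np.abs(td_errors) + self.eps

In a DDPG-PER training loop, the critic loss would weight each sampled transition by the returned importance-sampling weight, and update_priorities would then be called with the new absolute TD errors before the actor and target networks are updated.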
Pages: 83150-83170 (21 pages)