Time-Sensitive and Resource-Aware Concurrent Workflow Scheduling for Edge Computing Platforms Based on Deep Reinforcement Learning

Cited by: 0
Authors
Zhang, Jiaming [1 ]
Wang, Tao [2 ]
Cheng, Lianglun [2 ]
Affiliations
[1] Guangdong Univ Technol, Sch Comp Sci & Technol, Guangzhou 510006, Peoples R China
[2] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 19
Funding
National Natural Science Foundation of China;
Keywords
edge computing; workflow scheduling; deep reinforcement learning; proximal policy optimization; transformer; ALGORITHM;
DOI
10.3390/app131910689
Chinese Library Classification (CLC)
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Workflow scheduling on edge computing platforms in industrial scenarios aims to use the computing resources of edge platforms efficiently to meet user service requirements. Unlike ordinary task scheduling, the tasks in workflow scheduling carry predecessor and successor constraints. Existing solutions fall into traditional heuristic methods and modern deep reinforcement learning approaches. For heuristic methods, additional constraints complicate the design of scheduling rules, making suitable algorithms hard to devise; moreover, every change to the environment forces a redesign of the scheduling algorithm. Existing deep reinforcement learning-based scheduling methods, in turn, often suffer from training difficulty and long computation times: added constraints make it hard for the neural network to produce decisions that satisfy them, and earlier methods mostly built their models on RNNs and RNN variants, which offer no computation-time advantage. To address these issues, this paper introduces a novel reinforcement learning-based workflow scheduling method in which a neural network makes scheduling decisions directly. On the one hand, deep reinforcement learning removes the need for researchers to define complex scheduling rules. On the other hand, the method separates workflow parsing and constraint handling from the scheduling decisions, so the neural network can focus on learning how to schedule rather than on how to interpret workflow definitions and the constraints among sub-tasks. The method takes resource utilization and response time as its optimization objectives; the network is trained with the PPO algorithm combined with a self-critic baseline, and a parameter transfer strategy is used to find the balance point for multi-objective optimization. Exploiting the advantages of reinforcement learning, the network can be trained and tested on randomly generated datasets. Experimental results show that the proposed method can produce different scheduling outcomes to meet various scenario requirements without modifying the neural network, and that, compared with other deep reinforcement learning methods, it offers advantages in both scheduling performance and computation time.
Pages: 27
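As a rough illustration of the idea described in the abstract, namely separating workflow parsing and constraint handling from the scheduling decision, the sketch below shows one plausible reading. It is not the authors' code: the task names, runtimes, and two-node setup are made-up examples, and a greedy earliest-finish rule stands in for the PPO-trained policy network. The environment resolves the predecessor constraints and exposes only the "ready" sub-tasks, so the decision-maker never has to learn the DAG semantics itself.

```python
# Minimal sketch, not the paper's implementation: precedence constraints are
# handled outside the policy, which only chooses among currently ready tasks.

# Hypothetical workflow: task -> set of predecessor tasks, plus runtimes.
preds = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
runtime = {"A": 3.0, "B": 2.0, "C": 4.0, "D": 1.0}

node_free_at = [0.0, 0.0]   # two edge nodes; time each becomes idle
finish_time = {}            # task -> completion time
done = set()

while len(done) < len(preds):
    # Constraint handling outside the policy: only tasks whose predecessors
    # have all finished are candidates (the "action mask" fed to the network).
    ready = [t for t in preds if t not in done and preds[t] <= done]

    # Stand-in policy: pick the (task, node) pair with the earliest finish.
    # In the paper this decision would come from the PPO-trained network.
    best = None
    for t in ready:
        for n, free in enumerate(node_free_at):
            start = max(free, max((finish_time[p] for p in preds[t]), default=0.0))
            end = start + runtime[t]
            if best is None or end < best[0]:
                best = (end, t, n, start)

    end, t, n, start = best
    node_free_at[n] = end
    finish_time[t] = end
    done.add(t)
    print(f"task {t} -> node {n}: start {start:.1f}, finish {end:.1f}")

print("makespan:", max(finish_time.values()))
```

Because the ready-set computation guarantees every chosen task already satisfies its predecessor constraints, swapping the greedy rule for a learned policy changes only the selection step, which is the separation of concerns the abstract emphasizes.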