A deep-reinforcement-learning-based strategy selection approach for fault-tolerant offloading of delay-sensitive tasks in vehicular edge-cloud computing

Cited: 0
Authors
Vahide Babaiyan [1 ]
Omid Bushehrian [1 ]
Affiliations
[1] Shiraz University of Technology, Department of Computer Engineering and Information Technology
Keywords
Mobile edge-cloud computing; Fault-tolerant task offloading; Recovery pattern; Deep reinforcement learning
DOI
10.1007/s11227-025-07196-9
Abstract
Given the high resource demands of delay-sensitive tasks in vehicular networks, task offloading techniques have become prevalent in Mobile Edge-Cloud Computing (MECC) systems to reduce task completion times. Numerous studies have addressed the task offloading problem in dynamic and unpredictable MECC environments using various Deep Reinforcement Learning (DRL) approaches. However, the critical issue of fault-tolerant (FT) task offloading in vehicular networks has not been comprehensively addressed. Failures of tasks offloaded to MECC nodes can significantly degrade the quality of service in vehicular networks and potentially lead to catastrophic outcomes for critical vehicular tasks. To address this challenge, this paper proposes a DRL-based FT task offloading method for MECC environments that minimizes the average response time and latency of all tasks under faulty conditions. An analytical model of the optimization problem is developed, followed by a Deep Deterministic Policy Gradient (DDPG) algorithm that determines the optimal deployment and recovery patterns for delay-sensitive tasks. In the actor-critic architecture of DDPG, the actor determines the task execution plan (including primary/backup nodes and the optimal recovery strategy) from the current state, while the critic evaluates that decision; this allows DDPG to adapt smoothly to dynamic, continuous environments such as vehicular networks, where the system state, available resources, and failure rates change constantly. Our results show that by applying diverse failure recovery strategies in high-failure environments, the proposed method reduces the total task completion time and the number of failures by an average of 29% and 78%, respectively, compared to baseline methods. Furthermore, the method's adaptability to changes in available computational resources and varying failure rates is clearly demonstrated.
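The DDPG actor-critic interaction described in the abstract (actor proposes a continuous action from the state, critic scores it, and both are updated from a TD error and the deterministic policy gradient) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the state/action dimensions, the linear actor and critic, and the single-transition update are all placeholder assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions: e.g. node loads as state, offloading fractions as action.
STATE_DIM, ACTION_DIM = 4, 2

# Linear actor a = tanh(W_a s) and linear critic Q(s, a) = w_q . [s; a].
W_a = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))
w_q = rng.normal(scale=0.1, size=STATE_DIM + ACTION_DIM)

def actor(s):
    """Deterministic policy: maps a state to a continuous action in (-1, 1)."""
    return np.tanh(W_a @ s)

def critic(s, a):
    """Q-value estimate for a state-action pair."""
    return w_q @ np.concatenate([s, a])

def ddpg_step(s, a, r, s2, gamma=0.99, lr=1e-2):
    """One DDPG-style update on a single transition (s, a, r, s2)."""
    global W_a, w_q
    # Critic update: TD target uses the actor's action at the next state.
    target = r + gamma * critic(s2, actor(s2))
    td_err = target - critic(s, a)
    w_q = w_q + lr * td_err * np.concatenate([s, a])  # gradient of Q wrt w_q
    # Actor update: deterministic policy gradient, dQ/da * da/dW_a.
    a_pi = actor(s)
    dq_da = w_q[STATE_DIM:]        # Q is linear in the action
    dtanh = 1.0 - a_pi ** 2        # derivative of tanh at the actor's output
    W_a = W_a + lr * np.outer(dq_da * dtanh, s)
    return td_err
```

A full DDPG agent would add target networks, a replay buffer, and exploration noise; the snippet keeps only the core actor-critic update that lets the policy track a changing environment.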