Given the high resource demands of delay-sensitive tasks in vehicular networks, task offloading has become a prevalent technique in Mobile Edge-Cloud Computing (MECC) systems for reducing task completion times. Numerous studies have addressed the task offloading problem in dynamic and unpredictable MECC environments using various Deep Reinforcement Learning (DRL) approaches. However, the critical issue of fault-tolerant (FT) task offloading in vehicular networks has not been comprehensively addressed. Failures of tasks offloaded to MECC nodes can significantly degrade the quality of service in vehicular networks and may lead to catastrophic outcomes for critical vehicular tasks. To address this challenge, this paper proposes a DRL-based FT task offloading method for MECC environments that minimizes the average response time and latency of all tasks under faulty conditions. An analytical model of the optimization problem is developed, and a Deep Deterministic Policy Gradient (DDPG) algorithm is then applied to determine the optimal deployment and recovery patterns for delay-sensitive tasks. In the actor-critic architecture of DDPG, the actor determines task execution plans (including the primary/backup nodes and the optimal recovery strategy) from the current state, while the critic evaluates each decision; this allows DDPG to adapt smoothly to dynamic, continuous environments such as vehicular networks, where the system state, available resources, and failure rates change constantly. Our results show that, by applying diverse failure recovery strategies in high-failure environments, the proposed method reduces the total task completion time and the number of failures by an average of 29% and 78%, respectively, compared to baseline methods. Furthermore, the method adapts well to changes in available computational resources and to varying failure rates.
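To make the actor-critic structure described above concrete, the following is a minimal PyTorch sketch of the two networks: the actor maps the observed system state to a continuous action encoding a deployment/recovery plan, and the critic scores that (state, action) pair with a Q-value. The state and action dimensions, layer sizes, and feature semantics here are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of the DDPG actor-critic pair; dimensions are hypothetical.
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Maps the observed system state (e.g., task sizes, node loads,
    failure rates) to a continuous action encoding the primary/backup
    node choice and the recovery strategy."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions bounded in [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class Critic(nn.Module):
    """Evaluates a (state, action) pair with an estimated Q-value, i.e.,
    how good the chosen deployment/recovery plan is expected to be."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


if __name__ == "__main__":
    state_dim, action_dim = 12, 4                       # hypothetical sizes
    actor = Actor(state_dim, action_dim)
    critic = Critic(state_dim, action_dim)
    state = torch.randn(1, state_dim)                   # one observed system state
    action = actor(state)                               # proposed deployment/recovery plan
    q_value = critic(state, action)                     # critic's evaluation of that plan
    print(action.shape, q_value.shape)
```

In a full DDPG loop, the critic would be trained against a temporal-difference target and the actor updated to maximize the critic's output; those training details are omitted here.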