EPtask: Deep Reinforcement Learning Based Energy-Efficient and Priority-Aware Task Scheduling for Dynamic Vehicular Edge Computing

Cited by: 26
Authors
Li, Peisong [1 ]
Xiao, Ziren [1 ]
Wang, Xinheng [1 ]
Huang, Kaizhu [2 ,3 ]
Huang, Yi [4 ]
Gao, Honghao [5 ]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Sch Adv Technol, Suzhou 215123, Peoples R China
[2] Duke Kunshan Univ, Data Sci Res Ctr, Suzhou 215316, Peoples R China
[3] Duke Kunshan Univ, Div Nat & Appl Sci, Suzhou 215316, Peoples R China
[4] Univ Liverpool, Dept Elect Engn & Electron, Liverpool L69 3BX, Merseyside, England
[5] Shanghai Univ, Sch Comp Engn & Science, Shanghai 200444, Peoples R China
Source
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES | 2024, Vol. 9, Issue 01
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Resource management; Vehicle dynamics; Energy consumption; Dynamic scheduling; Processor scheduling; Heuristic algorithms; Proximal Policy Optimization; task scheduling; resource allocation; vehicular edge computing; RESOURCE-ALLOCATION;
DOI
10.1109/TIV.2023.3321679
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The increasing complexity of vehicles has led to a growing demand for in-vehicle services that rely on multiple sensors. In the Vehicular Edge Computing (VEC) paradigm, energy-efficient task scheduling is critical to achieving optimal completion time and energy consumption. Although extensive research has been conducted in this field, challenges remain in meeting the requirements of time-sensitive services and adapting to dynamic traffic environments. In this context, a novel Multi-action and Environment-adaptive Proximal Policy Optimization (MEPPO) algorithm is designed based on the conventional PPO algorithm, and a joint task scheduling and resource allocation method is then proposed on top of it. Specifically, the method involves three core aspects. First, a task scheduling strategy is designed that uses the PPO algorithm to generate task offloading and priority assignment decisions, further reducing the completion time of service requests. Second, a transmit power allocation scheme is designed that considers the expected transmission distance between vehicles and edge servers, minimizing transmission energy consumption by dynamically adjusting the allocated transmit power. Third, the proposed MEPPO-based scheduling method can make scheduling decisions for vehicles with different numbers of tasks by manipulating the state space of the PPO algorithm, making it adaptive to real-world dynamic VEC environments. Finally, the effectiveness of the proposed method is demonstrated through extensive simulations and on-site experiments.
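As an illustration of the mechanisms summarized in the abstract, the following minimal Python/PyTorch sketch shows one plausible shape of such a multi-action policy: a shared encoder with separate heads for offloading and priority decisions, a zero-padded state so that vehicles with different numbers of tasks map to a fixed-size input, and a simple distance-based transmit power heuristic. All class names, dimensions, and constants here are assumptions made for illustration, not the authors' implementation.

# Illustrative sketch only; hypothetical names and sizes throughout.
import torch
import torch.nn as nn

MAX_TASKS = 8        # assumed upper bound on tasks per vehicle
TASK_FEATURES = 4    # e.g. data size, CPU cycles, deadline, location (assumed)
NUM_SERVERS = 3      # candidate offloading targets, e.g. local + edge servers (assumed)
NUM_PRIORITIES = 4   # discrete priority levels (assumed)

class MultiActionPolicy(nn.Module):
    """Shared encoder with separate heads for offloading and priority decisions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(MAX_TASKS * TASK_FEATURES, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # One categorical distribution per task slot for each decision type.
        self.offload_head = nn.Linear(128, MAX_TASKS * NUM_SERVERS)
        self.priority_head = nn.Linear(128, MAX_TASKS * NUM_PRIORITIES)
        self.value_head = nn.Linear(128, 1)  # critic used for PPO advantage estimates

    def forward(self, padded_state):
        h = self.encoder(padded_state)
        offload_logits = self.offload_head(h).view(-1, MAX_TASKS, NUM_SERVERS)
        priority_logits = self.priority_head(h).view(-1, MAX_TASKS, NUM_PRIORITIES)
        return offload_logits, priority_logits, self.value_head(h)

def pad_task_state(tasks: torch.Tensor) -> torch.Tensor:
    """Zero-pad a (num_tasks, TASK_FEATURES) tensor to the fixed MAX_TASKS slots."""
    padded = torch.zeros(MAX_TASKS, TASK_FEATURES)
    padded[: tasks.shape[0]] = tasks
    return padded.flatten().unsqueeze(0)

def transmit_power(distance_m: float, p_min: float = 0.1, p_max: float = 1.0) -> float:
    """Toy distance-proportional power allocation (illustrative only): scale
    power with the expected transmission distance, clipped to the radio range."""
    max_range_m = 300.0  # assumed communication range
    return min(p_max, max(p_min, p_max * distance_m / max_range_m))

if __name__ == "__main__":
    policy = MultiActionPolicy()
    tasks = torch.rand(5, TASK_FEATURES)          # a vehicle with 5 pending tasks
    offload, priority, value = policy(pad_task_state(tasks))
    print(offload.shape, priority.shape, value.shape)
    print(f"power at 150 m: {transmit_power(150.0):.2f} W")

The two categorical heads correspond to the joint offloading and priority actions described above; in a full training loop their log-probabilities would typically be combined when computing the PPO clipped objective, which is one way to realize a "multi-action" policy.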
Pages: 1830-1846
Number of pages: 17
Related Papers
50 records in total
  • [41] Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing
    Moon, Sungwon
    Lim, Yujin
    SENSORS, 2022, 22 (24)
  • [42] Delay-Aware and Energy-Efficient Computation Offloading in Mobile-Edge Computing Using Deep Reinforcement Learning
    Ale, Laha
    Zhang, Ning
    Fang, Xiaojie
    Chen, Xianfu
    Wu, Shaohua
    Li, Longzhuang
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2021, 7 (03) : 881 - 892
  • [43] Deep Reinforcement Learning for Energy-Efficient Computation Offloading in Mobile-Edge Computing
    Zhou, Huan
    Jiang, Kai
    Liu, Xuxun
    Li, Xiuhua
    Leung, Victor C. M.
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (02) : 1517 - 1530
  • [44] Priority-Aware Deployment of Autoscaling Service Function Chains Based on Deep Reinforcement Learning
    Yu, Xue
    Wang, Ran
    Hao, Jie
    Wu, Qiang
    Yi, Changyan
    Wang, Ping
    Niyato, Dusit
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2024, 10 (03) : 1050 - 1062
  • [45] Reinforcement Learning Based Energy-Efficient Collaborative Inference for Mobile Edge Computing
    Xiao, Yilin
    Xiao, Liang
    Wan, Kunpeng
    Yang, Helin
    Zhang, Yi
    Wu, Yi
    Zhang, Yanyong
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2023, 71 (02) : 864 - 876
  • [46] Energy-Efficient Task Offloading and Resource Scheduling for Mobile Edge Computing
    Yu, Hongyan
    Wang, Quyuan
    Guo, Songtao
    2018 IEEE INTERNATIONAL CONFERENCE ON NETWORKING, ARCHITECTURE AND STORAGE (NAS), 2018,
  • [47] Priority-Aware Resource Scheduling for UAV-Mounted Mobile Edge Computing Networks
    Zhou, Wenqi
    Fan, Lisheng
    Zhou, Fasheng
    Li, Feng
    Lei, Xianfu
    Xu, Wei
    Nallanathan, Arumugam
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (07) : 9682 - 9687
  • [48] Curiosity-Driven Energy-Efficient Worker Scheduling in Vehicular Crowdsourcing: A Deep Reinforcement Learning Approach
    Liu, Chi Harold
    Zhao, Yinuo
    Dai, Zipeng
    Yuan, Ye
    Wang, Guoren
    Wu, Dapeng
    Leung, Kin K.
    2020 IEEE 36TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2020), 2020, : 25 - 36
  • [49] Deep Reinforcement Learning-Guided Task Reverse Offloading in Vehicular Edge Computing
    Gu, Anqi
    Wu, Huaming
    Tang, Huijun
    Tang, Chaogang
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 2200 - 2205
  • [50] Adaptive Prioritization and Task Offloading in Vehicular Edge Computing Through Deep Reinforcement Learning
    Uddin, Ashab
    Sakr, Ahmed Hamdi
    Zhang, Ning
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (03) : 5038 - 5052