Deadline-aware task offloading in vehicular networks using deep reinforcement learning

Cited by: 1
Authors
Farimani, Mina Khoshbazm [1 ]
Karimian-Aliabadi, Soroush [2 ]
Entezari-Maleki, Reza [1 ,3 ,4 ]
Egger, Bernhard [5 ]
Sousa, Leonel [4 ]
Affiliations
[1] Iran Univ Sci & Technol, Sch Comp Engn, Tehran, Iran
[2] Sharif Univ Technol, Dept Comp Engn, Tehran, Iran
[3] Inst Res Fundamental Sci IPM, Sch Comp Sci, Tehran, Iran
[4] Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal
[5] Seoul Natl Univ, Dept Comp Sci & Engn, Seoul, South Korea
Keywords
Computation offloading; Vehicular edge computing; Deep reinforcement learning; Deep Q-learning; Internet of vehicles; Resource allocation; Edge; Framework; Radio
DOI
10.1016/j.eswa.2024.123622
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Smart vehicles have a rising demand for computation resources, and vehicular edge computing has recently been recognized as an effective solution. Edge servers deployed in roadside units can accomplish tasks beyond the capacity embedded in the vehicles. The main challenge, however, is to carefully select the tasks to be offloaded, considering their deadlines, so as to reduce energy consumption while still delivering good performance. In this paper, we consider a vehicular edge computing network in which multiple cars move at non-constant speeds and generate tasks at each time slot. We then propose a task offloading algorithm, aware of each vehicle's direction, based on Rainbow, a deep Q-learning algorithm that combines several independent improvements to the deep Q-network algorithm. This overcomes conventional limits and reaches an optimal offloading policy by effectively incorporating the computation resources of edge servers to jointly minimize average delay and energy consumption. Real-world traffic data is used to evaluate the performance of the proposed approach against other algorithms, in particular deep Q-network, double deep Q-network, and deep recurrent Q-network. Experimental results show an average reduction of 18% in energy consumption and 15% in delay when using the proposed Rainbow deep Q-network based algorithm compared to the state-of-the-art. Moreover, the stability and convergence of the learning process are significantly improved by adopting the Rainbow algorithm.
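The abstract describes learning an offloading policy that trades off delay, energy, and task deadlines. The toy sketch below illustrates that decision structure with plain tabular Q-learning rather than the paper's Rainbow DQN; every constant (processing rates, energy costs, the deadline-miss penalty) and the tiny state space are hypothetical values chosen only for illustration.

```python
import random

# Toy tabular Q-learning sketch of deadline-aware offloading.
# State: (task size, deadline) bucket; actions: 0 = compute locally,
# 1 = offload to an edge server. All rates and costs are assumed values.

LOCAL_RATE, EDGE_RATE = 1.0, 4.0              # work units per time unit
TX_DELAY, LOCAL_ENERGY, TX_ENERGY = 2.0, 0.5, 0.1
PENALTY = 10.0                                 # extra cost for a missed deadline

def reward(size, deadline, action):
    """Negative (delay + energy), with a penalty if the deadline is missed."""
    if action == 0:                            # local execution
        delay, energy = size / LOCAL_RATE, size * LOCAL_ENERGY
    else:                                      # offload: transmit, then edge compute
        delay = TX_DELAY + size / EDGE_RATE
        energy = size * TX_ENERGY
    r = -(delay + energy)
    if delay > deadline:
        r -= PENALTY
    return r

def train(episodes=5000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}                                     # q[(size, deadline)] -> [Q(local), Q(offload)]
    for _ in range(episodes):
        s = (rng.choice([1, 2, 4, 8]), rng.choice([2.0, 6.0]))
        q.setdefault(s, [0.0, 0.0])
        # epsilon-greedy action selection
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
        q[s][a] += alpha * (reward(*s, a) - q[s][a])   # one-step (bandit) update
    return q

q = train()
policy = {s: max((0, 1), key=lambda a: q[s][a]) for s in q}
```

With these assumed costs the learned policy keeps small, tight-deadline tasks local (transmission overhead dominates) and offloads large tasks (edge speedup dominates), which is the qualitative behavior the abstract's joint delay/energy objective targets.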
Pages: 14
Related Papers
50 records in total
  • [31] Towards Efficient Task Offloading With Dependency Guarantees in Vehicular Edge Networks Through Distributed Deep Reinforcement Learning
    Liu, Haoqiang
    Huang, Wenzheng
    Kim, Dong In
    Sun, Sumei
    Zeng, Yonghong
    Feng, Shaohan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (09) : 13665 - 13681
  • [32] A Multi-Layer Deep Reinforcement Learning Approach for Joint Task Offloading and Scheduling in Vehicular Edge Networks
    Wu, Jiaqi
    Ye, Ziyuan
    He, Lin
    Wang, Tong
    Gao, Lin
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3872 - 3877
  • [33] Asynchronous Deep Reinforcement Learning for Data-Driven Task Offloading in MEC-Empowered Vehicular Networks
    Dai, Penglin
    Hu, Kaiwen
    Wu, Xiao
    Xing, Huanlai
    Yu, Zhaofei
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021
  • [34] DECO: A Deadline-Aware and Energy-Efficient Algorithm for Task Offloading in Mobile Edge Computing
    Azizi, Sadoon
    Othman, Majeed
    Khamfroush, Hana
    IEEE SYSTEMS JOURNAL, 2023, 17 (01): 952 - 963
  • [35] Privacy-Aware Multiagent Deep Reinforcement Learning for Task Offloading in VANET
    Wei, Dawei
    Zhang, Junying
    Shojafar, Mohammad
    Kumari, Saru
    Xi, Ning
    Ma, Jianfeng
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (11) : 13108 - 13122
  • [36] Adaptive Task Offloading for Mobile Aware Applications Based on Deep Reinforcement Learning
    Liu, Xianming
    Zhang, Chaokun
    He, Shen
    2022 IEEE 19TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2022), 2022, : 33 - 39
  • [37] CHRONUS: A Novel Deadline-aware Scheduler for Deep Learning Training Jobs
    Gao, Wei
    Ye, Zhisheng
    Sun, Peng
    Wen, Yonggang
    Zhang, Tianwei
    PROCEEDINGS OF THE 2021 ACM SYMPOSIUM ON CLOUD COMPUTING (SOCC '21), 2021, : 609 - 623
  • [38] Dynamic Vehicle Aware Task Offloading Based on Reinforcement Learning in a Vehicular Edge Computing Network
    Wang, Lingling
    Zhu, Xiumin
    Li, Nianxin
    Li, Yumei
    Ma, Shuyue
    Zhai, Linbo
    2022 18TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN, 2022, : 263 - 270
  • [39] Deep Reinforcement Learning-Guided Task Reverse Offloading in Vehicular Edge Computing
    Gu, Anqi
    Wu, Huaming
    Tang, Huijun
    Tang, Chaogang
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 2200 - 2205
  • [40] Dynamic Task Placement for Deadline-Aware IoT Applications in Federated Fog Networks
    Sarkar, Indranil
    Adhikari, Mainak
    Kumar, Neeraj
    Kumar, Sanjay
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (02): 1469 - 1478