Deadline-aware task offloading in vehicular networks using deep reinforcement learning

Cited: 1
|
Authors
Farimani, Mina Khoshbazm [1 ]
Karimian-Aliabadi, Soroush [2 ]
Entezari-Maleki, Reza [1 ,3 ,4 ]
Egger, Bernhard [5 ]
Sousa, Leonel [4 ]
Affiliations
[1] Iran Univ Sci & Technol, Sch Comp Engn, Tehran, Iran
[2] Sharif Univ Technol, Dept Comp Engn, Tehran, Iran
[3] Inst Res Fundamental Sci IPM, Sch Comp Sci, Tehran, Iran
[4] Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal
[5] Seoul Natl Univ, Dept Comp Sci & Engn, Seoul, South Korea
Keywords
Computation offloading; Vehicular edge computing; Deep reinforcement learning; Deep Q-learning; Internet of vehicles; RESOURCE-ALLOCATION; EDGE; FRAMEWORK; RADIO
DOI
10.1016/j.eswa.2024.123622
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Smart vehicles have a rising demand for computation resources, and vehicular edge computing has recently been recognized as an effective solution. Edge servers deployed in roadside units can accomplish tasks beyond the capacity of the resources embedded in the vehicles. The main challenge, however, is to carefully select which tasks to offload, taking their deadlines into account, so as to reduce energy consumption while still delivering good performance. In this paper, we consider a vehicular edge computing network in which multiple vehicles move at non-constant speeds and generate tasks in each time slot. We then propose a task offloading algorithm that is aware of each vehicle's direction and is based on Rainbow, a deep Q-learning algorithm that combines several independent improvements to the deep Q-network algorithm. The goal is to overcome the limits of conventional approaches and reach an optimal offloading policy by effectively exploiting the computation resources of the edge servers to jointly minimize average delay and energy consumption. Real-world traffic data is used to evaluate the performance of the proposed approach against other algorithms, in particular deep Q-network, double deep Q-network, and deep recurrent Q-network. Experimental results show average reductions of 18% in energy consumption and 15% in delay when using the proposed Rainbow deep Q-network based algorithm compared to the state-of-the-art. Moreover, the stability and convergence of the learning process are significantly improved by adopting the Rainbow algorithm.
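The abstract does not spell out the reward formulation, so the following is only a minimal sketch, in Python, of how a deadline-aware offloading agent could fold delay and energy consumption into a single per-slot reward; the weights w_delay and w_energy, the miss_penalty value, and the TaskOutcome container are hypothetical placeholders rather than the authors' actual model.

# Minimal illustrative sketch (assumed, not taken from the paper): a per-slot
# reward that jointly penalizes task delay and energy consumption and adds a
# fixed penalty when the task misses its deadline.
from dataclasses import dataclass

@dataclass
class TaskOutcome:
    delay: float      # seconds from task generation to completion
    energy: float     # joules spent on local computation or radio transmission
    deadline: float   # seconds allowed before the task result becomes useless

def slot_reward(outcome: TaskOutcome,
                w_delay: float = 0.5,        # hypothetical weight on delay
                w_energy: float = 0.5,       # hypothetical weight on energy
                miss_penalty: float = 10.0   # hypothetical deadline-miss penalty
                ) -> float:
    """Return the negative weighted cost; larger rewards mean cheaper, timely tasks."""
    cost = w_delay * outcome.delay + w_energy * outcome.energy
    if outcome.delay > outcome.deadline:
        cost += miss_penalty  # a deadline miss makes the chosen action strongly unattractive
    return -cost

# Example: a task offloaded to a roadside-unit edge server that finishes on time.
print(slot_reward(TaskOutcome(delay=0.08, energy=0.3, deadline=0.1)))  # prints approximately -0.19

Under a reward of this shape, any value-based learner such as Rainbow would be steered toward offloading decisions that keep tasks within their deadlines while balancing the delay and energy costs.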
Pages: 14
Related Papers
50 records in total
  • [21] Research on task offloading optimization strategies for vehicular networks based on game theory and deep reinforcement learning
    Wang, Lei
    Zhou, Wenjiang
    Xu, Haitao
    Li, Liang
    Cai, Lei
    Zhou, Xianwei
    [J]. FRONTIERS IN PHYSICS, 2023, 11
  • [22] Task Offloading and Resource Allocation in Vehicular Networks: A Lyapunov-Based Deep Reinforcement Learning Approach
    Kumar, Anitha Saravana
    Zhao, Lian
    Fernando, Xavier
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (10) : 13360 - 13373
  • [23] Deadline-Aware Offloading for High-Throughput Accelerators
    Yeh, Tsung Tai
    Sinclair, Matthew D.
    Beckmann, Bradford M.
    Rogers, Timothy G.
    [J]. 2021 27TH IEEE INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE COMPUTER ARCHITECTURE (HPCA 2021), 2021, : 479 - 492
  • [24] Deadline-aware Peer-to-Peer Task Offloading in Stochastic Mobile Cloud Computing Systems
    Zhou, Chongyu
    Tham, Chen-Khong
    [J]. 2018 15TH ANNUAL IEEE INTERNATIONAL CONFERENCE ON SENSING, COMMUNICATION, AND NETWORKING (SECON), 2018, : 415 - 423
  • [25] Deadline-Aware Cache Placement Scheme Using Fuzzy Reinforcement Learning in Device-to-Device Mobile Edge Networks
    Somesula, Manoj Kumar
    Kotte, Anusha
    Annadanam, Sudarshan Chakravarthy
    Mothku, Sai Krishna
    [J]. MOBILE NETWORKS & APPLICATIONS, 2022, 27 (05) : 2100 - 2117
  • [26] Dependency-aware task offloading based on deep reinforcement learning in mobile edge computing networks
    Li, Junnan
    Yang, Zhengyi
    Chen, Kai
    Ming, Zhao
    Li, Xiuhua
    Fan, Qilin
    Hao, Jinlong
    Cheng, Luxi
    [J]. WIRELESS NETWORKS, 2024, 30 (06) : 5519 - 5531
  • [27] Online computation offloading for deadline-aware tasks in edge computing
    He, Xin
    Zheng, Jiaqi
    He, Qiang
    Dai, Haipeng
    Liu, Bowen
    Dou, Wanchun
    Chen, Guihai
    [J]. WIRELESS NETWORKS, 2024, 30 (05) : 4073 - 4092
  • [29] Deep reinforcement learning-based joint task offloading and resource allocation in multipath transmission vehicular networks
    Yin, Chenyang
    Zhang, Yuyang
    Dong, Ping
    Zhang, Hongke
    [J]. TRANSACTIONS ON EMERGING TELECOMMUNICATIONS TECHNOLOGIES, 2024, 35 (01)
  • [30] Towards Efficient Task Offloading With Dependency Guarantees in Vehicular Edge Networks Through Distributed Deep Reinforcement Learning
    Liu, Haoqiang
    Huang, Wenzheng
    Kim, Dong In
    Sun, Sumei
    Zeng, Yonghong
    Feng, Shaohan
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (09) : 13665 - 13681