Multi-step reinforcement learning-based offloading for vehicle edge computing

Cited by: 0
Authors
Han, Shaodong [1]
Chen, Yingqun [1]
Chen, Guihong [1]
Yin, Jiao [2 ]
Wang, Hua [2]
Cao, Jinli [3 ]
Affiliations
[1] Guangdong Polytechn Normal Univ, Sch Cyber Secur, Guangzhou, Peoples R China
[2] Victoria Univ, Inst Sustainable Ind & Liveable Cities, Melbourne, Vic, Australia
[3] La Trobe Univ, Melbourne, Vic, Australia
Keywords
Internet of Vehicles; edge computing; Markov decision process; deep reinforcement learning
DOI
10.1109/ICACI58115.2023.10146186
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The Internet of Vehicles (IoV) has recently attracted increasing attention. However, IoV applications require massive computation within strict time limits, and computation energy consumption is also a significant concern. This paper therefore establishes a vehicle edge computing architecture that combines edge computing with the IoV to improve its computation capability. To optimize the computation offloading process, we model the entire process as a Markov decision process (MDP). Computation delay, computation energy consumption, and communication quality are combined in a utility function to form a multi-objective optimization problem. A deep reinforcement learning algorithm based on a multi-step deep Q-network (MSDQN) is proposed to solve the MDP without modeling the complicated transmission channels; in particular, the optimal multi-step value is determined experimentally. Simulation results show that the proposed offloading algorithm significantly reduces IoV computation delay and computation energy consumption when processing computation-intensive tasks.
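The "multi-step" idea in the abstract amounts to replacing the one-step DQN target with an n-step bootstrapped return before querying the target network. Below is a minimal sketch of that target computation, written in PyTorch; the function name multi_step_q_target, the (reward, next_state, done) transition format, the discount factor, and the idea that each reward blends the delay, energy, and communication-quality terms of the utility function are all illustrative assumptions, not the authors' implementation.

```python
import torch


def multi_step_q_target(q_target_net, transitions, gamma=0.99):
    """Compute an n-step bootstrapped Q-learning target (sketch).

    `transitions` is a list of n consecutive (reward, next_state, done)
    tuples drawn from a replay buffer; n is the multi-step value the
    paper tunes experimentally. Each reward is assumed to combine the
    delay, energy, and communication-quality terms of the utility
    function (the weighting is a placeholder, not the authors' values).
    """
    g, discount = 0.0, 1.0
    for reward, next_state, done in transitions:
        # Accumulate discounted rewards over the n-step window.
        g += discount * reward
        discount *= gamma
        if done:
            # Episode ended inside the window: no bootstrapping needed.
            return torch.tensor(g)
    # Bootstrap from the greedy target-network value of the state
    # reached after n steps, as in standard multi-step DQN.
    with torch.no_grad():
        bootstrap = q_target_net(next_state.unsqueeze(0)).max(dim=1).values
    return g + discount * bootstrap.squeeze(0)
```

Compared with the one-step target r + γ·max_a Q(s', a), accumulating n rewards before bootstrapping propagates delayed offloading costs back to earlier decisions faster, which is presumably why the paper searches for the best multi-step value experimentally.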
Pages: 8
Related Papers
50 records in total
  • [1] A multi-layer guided reinforcement learning-based tasks offloading in edge computing
    Robles-Enciso, Alberto
    Skarmeta, Antonio F.
    [J]. COMPUTER NETWORKS, 2023, 220
  • [2] Deep Reinforcement Learning-Based Computation Offloading in Vehicular Edge Computing
    Zhan, Wenhan
    Luo, Chunbo
    Wang, Jin
    Min, Geyong
    Duan, Hancong
    [J]. 2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [3] ADRLO: Adaptive deep reinforcement learning-based offloading for edge computing
    Li, Zhigang
    Wang, Yutong
    Zhang, Wentao
    Li, Shujie
    Sun, Xiaochuan
    [J]. PHYSICAL COMMUNICATION, 2023, 61
  • [4] Deep reinforcement learning-based multitask hybrid computing offloading for multiaccess edge computing
    Cai, Jun
    Fu, Hongtian
    Liu, Yan
    [J]. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (09) : 6221 - 6243
  • [5] Reinforcement Learning-Based Mobile Offloading for Edge Computing Against Jamming and Interference
    Xiao, Liang
    Lu, Xiaozhen
    Xu, Tangwei
    Wan, Xiaoyue
    Ji, Wen
    Zhang, Yanyong
    [J]. IEEE TRANSACTIONS ON COMMUNICATIONS, 2020, 68 (10) : 6114 - 6126
  • [6] Deep Reinforcement Learning-Based Offloading Decision Optimization in Mobile Edge Computing
    Zhang, Hao
    Wu, Wenjun
    Wang, Chaoyi
    Li, Meng
    Yang, Ruizhe
    [J]. 2019 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2019,
  • [7] Graph Reinforcement Learning-based CNN Inference Offloading in Dynamic Edge Computing
    Li, Nan
    Iosifidis, Alexandros
    Zhang, Qi
    [J]. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 982 - 987
  • [8] Reinforcement learning-based computation offloading in edge computing: Principles, methods, challenges
    Luo, Zhongqiang
    Dai, Xiang
    [J]. ALEXANDRIA ENGINEERING JOURNAL, 2024, 108 : 89 - 107
  • [9] Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network
    Wu, Ziying
    Yan, Danfeng
    [J]. CHINA COMMUNICATIONS, 2021, 18 (11) : 26 - 41