Adaptive Inference Reinforcement Learning for Task Offloading in Vehicular Edge Computing Systems

Cited by: 11
Authors
Tang, Dian [1 ]
Zhang, Xuefei [1 ]
Li, Meng [1 ]
Tao, Xiaofeng [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Natl Engn Lab Mobile Network Technol, Beijing, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
MOBILE; NETWORKS;
DOI
10.1109/iccworkshops49005.2020.9145133
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronic technology, communication technology];
Discipline codes
0808 ; 0809 ;
Abstract
Vehicular edge computing (VEC) is regarded as a promising technology for improving the quality of innovative applications in vehicular networks through computation offloading. However, in a VEC system, the distributed nature of computing resources and the high mobility of vehicles pose a critical challenge: deciding whether to execute a computation task locally or on edge servers so as to incur the least computation overhead. In this paper, we study a VEC system for a representative vehicle with multiple dependent tasks that must be processed successively, where nearby vehicles equipped with computing servers can be selected for offloading. Taking into account the migration cost incurred when positions shift, we formulate a sequential decision-making problem that minimizes the overall cost of delay and energy consumption. To tackle it effectively, we propose a deep Q network algorithm that incorporates Bayesian inference, exploiting prior distributions and statistical information to adapt to environmental dynamics in a smarter manner. Numerical results demonstrate that our proposed learning-based algorithm achieves a significant improvement in the overall cost of task execution compared with baseline policies.
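The sequential decision problem described in the abstract (dependent tasks processed in order, a local-vs-edge execution choice per task, and a migration penalty when the execution location shifts) can be sketched with a minimal reinforcement-learning loop. This is a hypothetical simplification, not the authors' implementation: it replaces the deep Q network and Bayesian inference with a plain tabular Q-learning update over a toy cost model, and all constants (LOCAL_COST, OFFLOAD_COST, MIGRATION_COST, N_TASKS) are illustrative assumptions.

```python
import random

# Toy model of the sequential offloading decision: N dependent tasks are
# processed in order; action 0 = execute locally, action 1 = offload to a
# nearby vehicle's edge server. All cost constants are illustrative
# assumptions, not values from the paper.
LOCAL_COST = 5.0       # hypothetical delay+energy cost of local execution
OFFLOAD_COST = 2.0     # hypothetical delay+energy cost of edge execution
MIGRATION_COST = 4.0   # hypothetical penalty when execution moves to the edge
N_TASKS = 6

def step_cost(prev_action, action):
    """Cost of executing the next task, given where the previous one ran."""
    cost = LOCAL_COST if action == 0 else OFFLOAD_COST
    if prev_action == 0 and action == 1:
        cost += MIGRATION_COST  # intermediate data must migrate to the edge
    return cost

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning stand-in for the paper's deep Q network."""
    rng = random.Random(seed)
    q = {}  # state (task_index, prev_action) -> [Q(local), Q(offload)]
    for _ in range(episodes):
        prev = None
        for t in range(N_TASKS):
            s = (t, prev)
            q.setdefault(s, [0.0, 0.0])
            # Epsilon-greedy exploration; costs are minimized, hence min().
            a = rng.randrange(2) if rng.random() < eps else q[s].index(min(q[s]))
            cost = step_cost(prev, a)
            future = 0.0
            if t + 1 < N_TASKS:
                future = min(q.setdefault((t + 1, a), [0.0, 0.0]))
            q[s][a] += alpha * (cost + gamma * future - q[s][a])
            prev = a
    return q

def greedy_total_cost(q):
    """Total cost of following the learned greedy policy over all tasks."""
    prev, total = None, 0.0
    for t in range(N_TASKS):
        a = q[(t, prev)].index(min(q[(t, prev)]))
        total += step_cost(prev, a)
        prev = a
    return total
```

Under this toy cost model, offloading every task is optimal (no migration penalty is ever paid after the first offload), and the learned greedy policy recovers that schedule; the paper's contribution is the deep, Bayesian-inference-augmented version of this decision loop, which the sketch does not attempt to reproduce.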
Pages: 6
Related papers
50 records in total
  • [31] Fuzzy Reinforcement Learning for energy efficient task offloading in Vehicular Fog Computing
    Vemireddy, Satish
    Rout, Rashmi Ranjan
    COMPUTER NETWORKS, 2021, 199
  • [32] An RSU-crossed dependent task offloading scheme for vehicular edge computing based on deep reinforcement learning
    Bi, Xiang
    Shi, Jianing
    Zhang, Benhong
    Lyu, Zengwei
    Huang, Lingjie
    INTERNATIONAL JOURNAL OF SENSOR NETWORKS, 2023, 41 (04) : 244 - 256
  • [33] A collaborative computation and dependency-aware task offloading method for vehicular edge computing: a reinforcement learning approach
    Liu, Guozhi
    Dai, Fei
    Huang, Bi
    Qiang, Zhenping
    Wang, Shuai
    Li, Lecheng
    JOURNAL OF CLOUD COMPUTING-ADVANCES SYSTEMS AND APPLICATIONS, 2022, 11 (01):
  • [34] Deep reinforcement learning based offloading decision algorithm for vehicular edge computing
    Hu, Xi
    Huang, Yang
    PEERJ, 2022, 10
  • [35] Deep Reinforcement Learning-Based Computation Offloading in Vehicular Edge Computing
    Zhan, Wenhan
    Luo, Chunbo
    Wang, Jin
    Min, Geyong
    Duan, Hancong
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [36] A collaborative computation and dependency-aware task offloading method for vehicular edge computing: a reinforcement learning approach
    Guozhi Liu
    Fei Dai
    Bi Huang
    Zhenping Qiang
    Shuai Wang
    Lecheng Li
    Journal of Cloud Computing, 11
  • [37] Deep reinforcement learning based offloading decision algorithm for vehicular edge computing
    Hu, Xi
    Huang, Yang
    PEERJ COMPUTER SCIENCE, 2022, 8
  • [38] Deep-Reinforcement-Learning-Based Offloading Scheduling for Vehicular Edge Computing
    Zhan, Wenhan
    Luo, Chunbo
    Wang, Jin
    Wang, Chao
    Min, Geyong
    Duan, Hancong
    Zhu, Qingxin
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (06) : 5449 - 5465
  • [39] Deep reinforcement learning based offloading decision algorithm for vehicular edge computing
    Hu X.
    Huang Y.
    PeerJ Computer Science, 2022, 8
  • [40] Offline Reinforcement Learning for Asynchronous Task Offloading in Mobile Edge Computing
    Zhang, Bolei
    Xiao, Fu
    Wu, Lifa
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (01): : 939 - 952