Adaptive Inference Reinforcement Learning for Task Offloading in Vehicular Edge Computing Systems

Cited by: 11
Authors
Tang, Dian [1 ]
Zhang, Xuefei [1 ]
Li, Meng [1 ]
Tao, Xiaofeng [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Natl Engn Lab Mobile Network Technol, Beijing, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
MOBILE; NETWORKS;
DOI
10.1109/iccworkshops49005.2020.9145133
CLC Classification
TM [Electrotechnics]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Vehicular edge computing (VEC) is expected to be a promising technology for improving the quality of innovative applications in vehicular networks through computation offloading. However, in VEC systems, the distributed nature of computing resources and the high mobility of vehicles pose a critical challenge: deciding whether executing a computation task locally or on edge servers incurs the least computation overhead. In this paper, we study a VEC system for a representative vehicle with multiple dependent tasks that must be processed successively, where nearby vehicles equipped with computing servers can be selected for offloading. Accounting for the migration cost incurred as vehicle positions shift, we formulate a sequential decision-making problem that minimizes the overall cost of delay and energy consumption. To solve it effectively, we propose a deep Q network algorithm that incorporates Bayesian inference, exploiting prior distributions and statistical information to adapt to environmental dynamics in a smarter manner. Numerical results demonstrate that our proposed learning-based algorithm achieves a significant improvement in the overall cost of task execution compared with other baseline policies.
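The decision structure the abstract describes — for each task in a sequence, choose local execution or offloading so that the discounted sum of delay, energy, and migration costs is minimized — can be illustrated with a toy example. This is a rough sketch only, not the paper's method: the proposed deep Q network with Bayesian inference is replaced here by plain tabular Q-learning, and every number (delays, energies, migration cost, weights) is hypothetical.

```python
import random

random.seed(0)

N_TASKS = 5            # dependent tasks processed in sequence
ACTIONS = [0, 1]       # 0 = execute locally, 1 = offload to an edge server

def step_cost(task, action):
    """Weighted delay + energy cost; offloading adds a migration cost.

    All figures are made up for illustration.
    """
    local_delay, local_energy = 1.0, 0.8
    edge_delay, edge_energy = 0.4, 0.2
    migration = 0.1
    w_delay, w_energy = 0.5, 0.5
    if action == 0:
        return w_delay * local_delay + w_energy * local_energy
    return w_delay * edge_delay + w_energy * edge_energy + migration

# Q[s][a]: estimated remaining cost of taking action a at task s
# (extra terminal row stays at zero cost)
Q = [[0.0, 0.0] for _ in range(N_TASKS + 1)]
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(2000):
    for s in range(N_TASKS):
        # epsilon-greedy over costs: exploit = pick the cheaper action
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = min(ACTIONS, key=lambda x: Q[s][x])
        c = step_cost(s, a)
        # Cost-minimizing Bellman update (min over next actions, not max)
        Q[s][a] += alpha * (c + gamma * min(Q[s + 1]) - Q[s][a])

policy = [min(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_TASKS)]
print(policy)  # with these hypothetical costs, offloading wins at every task
```

Because costs are minimized rather than rewards maximized, the Bellman backup uses `min` over next-state actions; the paper's contribution lies in how the Q-function is learned (a DQN shaped by Bayesian priors), which this tabular stand-in does not attempt to reproduce.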
Pages: 6
Related Papers
50 records
  • [1] Adaptive Task Offloading in Vehicular Edge Computing Networks: a Reinforcement Learning Based Scheme
    Zhang, Jie; Guo, Hongzhi; Liu, Jiajia
    [J]. Mobile Networks and Applications, 2020, 25 (05): 1736-1745
  • [2] Adaptive Learning-Based Task Offloading for Vehicular Edge Computing Systems
    Sun, Yuxuan; Guo, Xueying; Song, Jinhui; Zhou, Sheng; Jiang, Zhiyuan; Liu, Xin; Niu, Zhisheng
    [J]. IEEE Transactions on Vehicular Technology, 2019, 68 (04): 3061-3074
  • [3] Meta Reinforcement Learning for Multi-Task Offloading in Vehicular Edge Computing
    Dai, Penglin; Huang, Yaorong; Hu, Kaiwen; Wu, Xiao; Xing, Huanlai; Yu, Zhaofei
    [J]. IEEE Transactions on Mobile Computing, 2024, 23 (03): 2123-2138
  • [4] Task offloading in vehicular edge computing networks via deep reinforcement learning
    Karimi, Elham; Chen, Yuanzhu; Akbari, Behzad
    [J]. Computer Communications, 2022, 189: 193-204
  • [5] Trusted Task Offloading in Vehicular Edge Computing Networks: A Reinforcement Learning Based Solution
    Zhang, Lushi; Guo, Hongzhi; Zhou, Xiaoyi; Liu, Jiajia
    [J]. IEEE Conference on Global Communications (GLOBECOM), 2023: 6711-6716
  • [6] Deep Reinforcement Learning-Guided Task Reverse Offloading in Vehicular Edge Computing
    Gu, Anqi; Wu, Huaming; Tang, Huijun; Tang, Chaogang
    [J]. 2022 IEEE Global Communications Conference (GLOBECOM 2022), 2022: 2200-2205
  • [7] Mean-field reinforcement learning for decentralized task offloading in vehicular edge computing
    Shen, Si; Shen, Guojiang; Yang, Xiaoxue; Xia, Feng; Du, Hao; Kong, Xiangjie
    [J]. Journal of Systems Architecture, 2024, 146
  • [8] Deep Reinforcement Learning for Task Offloading in Mobile Edge Computing Systems
    Tang, Ming; Wong, Vincent W. S.
    [J]. IEEE Transactions on Mobile Computing, 2022, 21 (06): 1985-1997
  • [9] Adaptive Task Offloading in Coded Edge Computing: A Deep Reinforcement Learning Approach
    Nguyen Van Tam; Nguyen Quang Hieu; Nguyen Thi Thanh Van; Nguyen Cong Luong; Niyato, Dusit; Kim, Dong In
    [J]. IEEE Communications Letters, 2021, 25 (12): 3878-3882