Adaptive Inference Reinforcement Learning for Task Offloading in Vehicular Edge Computing Systems

Cited: 11
Authors
Tang, Dian [1 ]
Zhang, Xuefei [1 ]
Li, Meng [1 ]
Tao, Xiaofeng [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Natl Engn Lab Mobile Network Technol, Beijing, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
MOBILE; NETWORKS;
DOI
10.1109/iccworkshops49005.2020.9145133
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Vehicular edge computing (VEC) is expected to be a promising technology for improving the quality of innovative applications in vehicular networks through computation offloading. However, in VEC systems, the distributed nature of computing resources and the high mobility of vehicles pose a critical challenge: deciding whether to execute a computation task locally or on an edge server so as to incur the least computation overhead. In this paper, we study a VEC system for a representative vehicle with multiple dependent tasks that must be processed successively, where nearby vehicles equipped with computing servers can be selected as offloading targets. Accounting for the migration cost incurred when the vehicle's position shifts, we formulate a sequential decision-making problem that minimizes the overall cost of delay and energy consumption. To tackle it effectively, we propose a deep Q network algorithm that incorporates Bayesian inference, exploiting prior distributions and statistical information to adapt to environmental dynamics in a smarter manner. Numerical results demonstrate that our proposed learning-based algorithm achieves a significant improvement in the overall cost of task execution compared with other baseline policies.
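The abstract frames offloading as a sequential decision problem: at each step the vehicle picks local execution or an edge server to minimize a weighted delay-plus-energy cost. The paper's actual method is a deep Q network augmented with Bayesian inference; as a rough illustration of the decision structure only, the following is a toy tabular Q-learning sketch in which the states, cost values, and channel dynamics are all invented assumptions.

```python
import numpy as np

# Toy tabular Q-learning sketch of the local-vs-offload decision.
# All states, costs, and transition dynamics below are hypothetical;
# this is NOT the paper's Bayesian deep Q network.

rng = np.random.default_rng(0)

N_STATES = 4        # assumed channel-quality levels (0 = worst)
N_ACTIONS = 2       # 0 = execute locally, 1 = offload to edge server
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def cost(state, action):
    """Assumed mean delay+energy cost: local cost is flat, offloading
    gets cheaper as the channel improves."""
    if action == 0:
        return 1.0
    return 1.9 - 0.6 * state

Q = np.zeros((N_STATES, N_ACTIONS))
state = 0
for _ in range(5000):
    # epsilon-greedy selection; we minimize cost, so greedy = argmin
    if rng.random() < EPS:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmin(Q[state]))
    c = cost(state, action) + 0.05 * rng.standard_normal()  # noisy observed cost
    next_state = int(rng.integers(N_STATES))  # channel evolves randomly
    # Q-learning update on costs (min over next-state actions)
    Q[state, action] += ALPHA * (c + GAMMA * Q[next_state].min()
                                 - Q[state, action])
    state = next_state

policy = Q.argmin(axis=1)
print(policy)  # expected pattern: local when the channel is poor, offload when good
```

Under these assumed costs the learned policy keeps tasks local in poor-channel states and offloads in good-channel states, mirroring the trade-off the abstract describes between local computation overhead and offloading cost.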
Pages: 6
Related Papers
50 records
  • [21] Task offloading for vehicular edge computing with imperfect CSI: A deep reinforcement approach
    Wu, Yuxin
    Xia, Junjuan
    Gao, Chongzhi
    Ou, Jiangtao
    Fan, Chengyuan
    Ou, Jianghong
    Fan, Dahua
    PHYSICAL COMMUNICATION, 2022, 55
  • [22] Reinforcement learning based tasks offloading in vehicular edge computing networks
    Cao, Shaohua
    Liu, Di
    Dai, Congcong
    Wang, Chengqi
    Yang, Yansheng
    Zhang, Weishan
    Zheng, Danyang
    COMPUTER NETWORKS, 2023, 234
  • [23] Deep Reinforcement Learning for Vehicular Edge Computing: An Intelligent Offloading System
    Ning, Zhaolong
    Dong, Peiran
    Wang, Xiaojie
    Rodrigues, Joel J. P. C.
    Xia, Feng
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2019, 10 (06)
  • [24] Reinforcement Learning for Optimizing Delay-Sensitive Task Offloading in Vehicular Edge-Cloud Computing
    Binh, Ta Huu
    Son, Do Bao
    Vo, Hiep
    Nguyen, Binh Minh
    Binh, Huynh Thi Thanh
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (02): 2058 - 2069
  • [25] Value-based reinforcement learning approaches for task offloading in Delay Constrained Vehicular Edge Computing
    Do Bao Son
    Ta Huu Binh
    Vo, Hiep Khac
    Binh Minh Nguyen
    Huynh Thi Thanh Binh
    Yu, Shui
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 113
  • [26] Self-Adaptive Learning of Task Offloading in Mobile Edge Computing Systems
    Huang, Peng
    Deng, Minjiang
    Kang, Zhiliang
    Liu, Qinshan
    Xu, Lijia
    ENTROPY, 2021, 23 (09)
  • [27] Deep Learning-Based Task Offloading for Vehicular Edge Computing
    Zeng, Feng
    Liu, Chengsheng
    Tangjiang, Junzhe
    Li, Wenjia
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT III, 2021, 12939 : 291 - 298
  • [28] Reinforcement-Learning-Based Task Offloading in Edge Computing Systems with Firm Deadlines
    Doan, Khai
    Araujo, Wesley
    Kranakis, Evangelos
    Lambadaris, Ioannis
    Viniotis, Yannis
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023: 934 - 940
  • [29] Task Migration Based on Reinforcement Learning in Vehicular Edge Computing
    Moon, Sungwon
    Park, Jaesung
    Lim, Yujin
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2021, 2021
  • [30] Deep reinforcement learning based offloading decision algorithm for vehicular edge computing
    Hu, Xi
    Huang, Yang
    PEERJ COMPUTER SCIENCE, 2022, 8