Joint Service Caching and Computation Offloading Scheme Based on Deep Reinforcement Learning in Vehicular Edge Computing Systems

Cited: 25
Authors
Xue, Zheng [1 ]
Liu, Chang [1 ]
Liao, Canliang [1 ]
Han, Guojun [1 ]
Sheng, Zhengguo [2 ]
Affiliations
[1] Guangdong Univ Technol, Sch Informat Engn, Guangzhou 510006, Peoples R China
[2] Univ Sussex, Dept Engn & Design, Brighton BN1 9RH, England
Keywords
Task analysis; Servers; Delays; Vehicle dynamics; Optimization; Edge computing; Resource management; Vehicular edge computing; service caching; computation offloading; deep reinforcement learning; RESOURCE-ALLOCATION; EFFICIENT;
DOI
10.1109/TVT.2023.3234336
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Vehicular edge computing (VEC) is a new computing paradigm that enhances vehicular performance by introducing both computation offloading and service caching to resource-constrained vehicles and ubiquitous edge servers. Recent developments in autonomous vehicles enable a variety of applications that demand high computing resources and low latency, such as automatic driving, auto navigation, etc. However, the highly dynamic topology of vehicular networks and the limited caching space at resource-constrained edge servers call for intelligent design of caching placement and computation offloading. Meanwhile, service caching decisions are highly correlated with computation offloading decisions, which poses a great challenge to effectively designing service caching and computation offloading strategies. In this paper, we investigate a joint optimization problem by integrating service caching and computation offloading in a general VEC scenario with time-varying task requests. To minimize the average task processing delay, we formulate the problem as a long-term mixed-integer non-linear program (MINLP) and propose an algorithm based on deep reinforcement learning to obtain a suboptimal solution with low computational complexity. The simulation results demonstrate that our proposed scheme achieves an effective improvement in task processing delay compared with other representative benchmark methods.
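To illustrate the kind of decision loop the abstract describes, the sketch below trains a tabular Q-learning agent (a simplified stand-in for the paper's deep reinforcement learning algorithm) on a toy VEC model. All system parameters here are illustrative assumptions, not the paper's: one edge server caches a single service out of four, task requests follow a skewed popularity distribution, and a task whose service is cached incurs a small edge-processing delay while any other task falls back to slow local processing. The reward is the negative task delay, so the agent learns a caching policy that minimizes average delay.

```python
import random
from collections import defaultdict

# Toy VEC model (illustrative assumptions, not the paper's exact system):
# an edge server with room to cache ONE of N_SERVICES services; each slot,
# a vehicle task requests a service drawn from a skewed popularity
# distribution. A cached service is served at EDGE_DELAY; otherwise the
# task is processed locally at LOCAL_DELAY. The joint offloading choice
# is folded into this delay model for brevity.
N_SERVICES = 4
POPULARITY = [0.55, 0.25, 0.15, 0.05]   # request probability per service
EDGE_DELAY, LOCAL_DELAY = 1.0, 4.0

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1       # learning rate, discount, exploration


def step(cached, requested, action):
    """Apply caching action; reward = negative processing delay."""
    delay = EDGE_DELAY if requested == cached else LOCAL_DELAY
    return action, -delay                # action = service cached next slot


def train(episodes=20000, seed=0):
    """Tabular Q-learning over (cached_service, action) pairs."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    cached = 0
    for _ in range(episodes):
        requested = rng.choices(range(N_SERVICES), POPULARITY)[0]
        if rng.random() < EPS:           # epsilon-greedy exploration
            action = rng.randrange(N_SERVICES)
        else:
            action = max(range(N_SERVICES), key=lambda a: Q[(cached, a)])
        nxt, reward = step(cached, requested, action)
        best_next = max(Q[(nxt, a)] for a in range(N_SERVICES))
        Q[(cached, action)] += ALPHA * (reward + GAMMA * best_next
                                        - Q[(cached, action)])
        cached = nxt
    return Q


if __name__ == "__main__":
    Q = train()
    best = max(range(N_SERVICES), key=lambda a: Q[(0, a)])
    print("service to cache:", best)
```

With enough episodes the greedy policy settles on caching the most popular service, since that minimizes expected delay; the paper replaces this tiny tabular state space with a deep network over a far richer joint caching-and-offloading state.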
Pages: 6709-6722
Number of pages: 14
Related Papers
50 records
  • [41] Reinforcement learning based tasks offloading in vehicular edge computing networks
    Cao, Shaohua
    Liu, Di
    Dai, Congcong
    Wang, Chengqi
    Yang, Yansheng
    Zhang, Weishan
    Zheng, Danyang
    [J]. COMPUTER NETWORKS, 2023, 234
  • [42] DDPG-based Computation Offloading and Service Caching in Mobile Edge Computing
    Chen, Lingxiao
    Gong, Guoqiang
    Jiang, Kai
    Zhou, Huan
    Chen, Rui
    [J]. IEEE INFOCOM 2022 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2022
  • [43] Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing
    Moon, Sungwon
    Lim, Yujin
    [J]. SENSORS, 2022, 22 (24)
  • [44] Deep Reinforcement Learning-Based Task Offloading and Service Migrating Policies in Service Caching-Assisted Mobile Edge Computing
    Ke Hongchang
    Wang Hui
    Sun Hongbin
    Halvin Yang
    [J]. China Communications, 2024, 21 (04) : 88 - 103
  • [45] Deep reinforcement learning-based task offloading and service migrating policies in service caching-assisted mobile edge computing
    Ke, Hongchang
    Hui, Wan
    Sun, Hongbin
    Yan, Halvin
    [J]. CHINA COMMUNICATIONS, 2024, 21 (04) : 88 - 103
  • [46] Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning
    Chen, Xianfu
    Zhang, Honggang
    Wu, Celimuge
    Mao, Shiwen
    Ji, Yusheng
    Bennis, Mehdi
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2019, 6 (03): : 4005 - 4018
  • [47] A Computing Offloading Resource Allocation Scheme Using Deep Reinforcement Learning in Mobile Edge Computing Systems
    Xuezhu Li
    [J]. Journal of Grid Computing, 2021, 19
  • [48] A Computing Offloading Resource Allocation Scheme Using Deep Reinforcement Learning in Mobile Edge Computing Systems
    Li, Xuezhu
    [J]. JOURNAL OF GRID COMPUTING, 2021, 19 (03)
  • [49] CoPace: Edge Computation Offloading and Caching for Self-Driving With Deep Reinforcement Learning
    Tian, Hao
    Xu, Xiaolong
    Qi, Lianyong
    Zhang, Xuyun
    Dou, Wanchun
    Yu, Shui
    Ni, Qiang
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2021, 70 (12) : 13281 - 13293
  • [50] Federated Deep Reinforcement Learning for Joint AeBSs Deployment and Computation Offloading in Aerial Edge Computing Network
    Liu, Lei
    Zhao, Yikun
    Qi, Fei
    Zhou, Fanqin
    Xie, Weiliang
    He, Haoran
    Zheng, Hao
    [J]. ELECTRONICS, 2022, 11 (21)