Cloud-Edge-End Collaborative Task Offloading in Vehicular Edge Networks: A Multilayer Deep Reinforcement Learning Approach

Cited by: 0
Authors
Wu, Jiaqi [1 ,2 ]
Tang, Ming [3 ]
Jiang, Changkun [4 ]
Gao, Lin [1 ,2 ]
Cao, Bin [1 ,2 ]
Affiliations
[1] Harbin Inst Technol, Sch Elect & Informat Engn, Shenzhen 518055, Peoples R China
[2] Harbin Inst Technol, Guangdong Prov Key Lab Aerosp Commun & Networking, Shenzhen 518055, Peoples R China
[3] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[4] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024 / Vol. 11 / Iss. 22
Funding
National Natural Science Foundation of China
Keywords
Servers; Cloud computing; Processor scheduling; Collaboration; Vehicle-to-infrastructure; Edge computing; Vehicular ad hoc networks; Resource management; Deep reinforcement learning; Decision making; Deep reinforcement learning (DRL); mobile-edge computing (MEC); task offloading; vehicular edge network (VEN); ALLOCATION;
DOI
10.1109/JIOT.2024.3472472
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Mobile-edge computing (MEC) is a promising computing scheme to support computation-intensive AI applications in vehicular networks, by enabling vehicles to offload computation tasks to edge computing servers deployed on road side units (RSUs) in proximity to them. In this work, we consider an MEC-enabled vehicular edge network (VEN), where each vehicle can offload tasks to edge/cloud computing servers via vehicle-to-infrastructure (V2I) links or to other end-vehicles via vehicle-to-vehicle (V2V) links. In such a cloud-edge-end collaborative offloading scenario, we focus on the joint task offloading, scheduling, and resource allocation problem for vehicles, which is challenging due to the online and asynchronous decision-making requirement for each task. To solve the problem, we propose a multilayer deep reinforcement learning (DRL)-based approach, where each vehicle constructs and trains three modules to make decisions at different layers: 1) Offloading Module (first layer), determining whether to offload each task, using the dueling and double deep Q-network (D3QN) framework; 2) Scheduling Module (second layer), determining where and how to offload each task in the offloading queues, together with the transmission power, using the parameterized deep Q-network (PDQN) framework; and 3) Computing Module (third layer), determining how much computing resource to allocate to each task in the computation queues, using classic optimization techniques. We provide the detailed algorithm design and perform extensive simulations to evaluate its performance. Simulation results show that our proposed algorithm outperforms the existing algorithms in the literature, reducing the average cost by 25.86%-72.51% and increasing the average satisfaction rate by 3.48%-90.53%.
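The first-layer Offloading Module rests on the D3QN idea: a dueling network decomposes Q(s, a) into a state value V(s) plus advantages A(s, a), and the double-DQN rule uses the online network to select the greedy next action while the target network evaluates it. The sketch below is an illustrative, numpy-only rendering of these two mechanisms with single linear layers; it is not the paper's implementation, and the state/action dimensions (e.g., task features, offload-vs-local actions) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class DuelingQNet:
    """Minimal dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    One linear layer per stream, for illustration only."""
    def __init__(self, state_dim, n_actions):
        self.Wv = rng.normal(scale=0.1, size=(state_dim, 1))          # value stream
        self.Wa = rng.normal(scale=0.1, size=(state_dim, n_actions))  # advantage stream
    def q_values(self, s):
        v = s @ self.Wv           # state value V(s), shape (1,)
        a = s @ self.Wa           # advantages A(s, .), shape (n_actions,)
        return v + a - a.mean()   # dueling aggregation (broadcasts to (n_actions,))

def double_dqn_target(online, target, s_next, reward, gamma=0.9, done=False):
    """Double-DQN target: the online net selects the action,
    the target net evaluates it (reduces overestimation bias)."""
    if done:
        return reward
    a_star = int(np.argmax(online.q_values(s_next)))          # selection (online net)
    return reward + gamma * target.q_values(s_next)[a_star]   # evaluation (target net)

# Hypothetical setup: 4 task features; 2 actions (process locally vs. offload).
state_dim, n_actions = 4, 2
online, target = DuelingQNet(state_dim, n_actions), DuelingQNet(state_dim, n_actions)
s_next = rng.normal(size=state_dim)
y = double_dqn_target(online, target, s_next, reward=1.0)
```

In training, `y` would serve as the regression target for the online network's Q-value of the taken action, with the target network's weights periodically synced from the online network.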
Pages: 36272-36290
Page count: 19
Related Papers
50 records in total
  • [21] Task Offloading in Cloud-Edge Collaborative Environment Based on Deep Reinforcement Learning and Fuzzy Logic
    Wu, Xiaojun
    Wang, Lulu
    Yuan, Sheng
    Chai, Wei
    2024 IEEE 4TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING AND ARTIFICIAL INTELLIGENCE, SEAI 2024, 2024, : 301 - 308
  • [22] Blockchain-Secured Task Offloading and Resource Allocation for Cloud-Edge-End Cooperative Networks
    Fan, Wenhao
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (08) : 8092 - 8110
  • [23] CSO-DRL: A Collaborative Service Offloading Approach with Deep Reinforcement Learning in Vehicular Edge Computing
    Huang, Yuze
    Cao, Yuhui
    Zhang, Miao
    Feng, Beipeng
    Guo, Zhenzhen
    SCIENTIFIC PROGRAMMING, 2022, 2022
  • [24] Load balance-aware dynamic cloud-edge-end collaborative offloading strategy
    Fan, Yueqi
    PLOS ONE, 2024, 19 (01):
  • [25] Deep Reinforcement Learning-Guided Task Reverse Offloading in Vehicular Edge Computing
    Gu, Anqi
    Wu, Huaming
    Tang, Huijun
    Tang, Chaogang
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 2200 - 2205
  • [26] Adaptive Prioritization and Task Offloading in Vehicular Edge Computing Through Deep Reinforcement Learning
    Uddin, Ashab
    Sakr, Ahmed Hamdi
    Zhang, Ning
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (03) : 5038 - 5052
  • [27] A collaborative computation and dependency-aware task offloading method for vehicular edge computing: a reinforcement learning approach
    Liu, Guozhi
    Dai, Fei
    Huang, Bi
    Qiang, Zhenping
    Wang, Shuai
    Li, Lecheng
    JOURNAL OF CLOUD COMPUTING-ADVANCES SYSTEMS AND APPLICATIONS, 2022, 11 (01):
  • [29] Task Decomposition and Hierarchical Scheduling for Collaborative Cloud-Edge-End Computing
    Cai, Jun
    Liu, Wei
    Huang, Zhongwei
    Yu, Fei Richard
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (06) : 4368 - 4382
  • [30] Deep Reinforcement Learning for Task Offloading in Edge Computing
    Xie, Bo
    Cui, Haixia
    2024 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND INTELLIGENT SYSTEMS ENGINEERING, MLISE 2024, 2024, : 250 - 254