Cloud-Edge-End Collaborative Task Offloading in Vehicular Edge Networks: A Multilayer Deep Reinforcement Learning Approach

Cited by: 0
Authors
Wu, Jiaqi [1 ,2 ]
Tang, Ming [3 ]
Jiang, Changkun [4 ]
Gao, Lin [1 ,2 ]
Cao, Bin [1 ,2 ]
Affiliations
[1] Harbin Inst Technol, Sch Elect & Informat Engn, Shenzhen 518055, Peoples R China
[2] Harbin Inst Technol, Guangdong Prov Key Lab Aerosp Commun & Networking, Shenzhen 518055, Peoples R China
[3] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[4] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 22
Funding
National Natural Science Foundation of China
Keywords
Servers; Cloud computing; Processor scheduling; Collaboration; Vehicle-to-infrastructure; Edge computing; Vehicular ad hoc networks; Resource management; Deep reinforcement learning; Decision making; Deep reinforcement learning (DRL); mobile-edge computing (MEC); task offloading; vehicular edge network (VEN); ALLOCATION;
DOI
10.1109/JIOT.2024.3472472
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Mobile-edge computing (MEC) is a promising computing paradigm for supporting computation-intensive AI applications in vehicular networks, as it enables vehicles to offload computation tasks to edge computing servers deployed at nearby roadside units (RSUs). In this work, we consider an MEC-enabled vehicular edge network (VEN), where each vehicle can offload tasks to edge/cloud computing servers via vehicle-to-infrastructure (V2I) links or to other end vehicles via vehicle-to-vehicle (V2V) links. In such a cloud-edge-end collaborative offloading scenario, we focus on the joint task offloading, scheduling, and resource allocation problem for vehicles, which is challenging due to the online and asynchronous decision making required for each task. To solve the problem, we propose a multilayer deep reinforcement learning (DRL)-based approach, in which each vehicle constructs and trains three modules to make decisions at different layers: 1) the Offloading Module (first layer), which determines whether to offload each task, using the dueling and double deep Q-network (D3QN) framework; 2) the Scheduling Module (second layer), which determines where and how to offload each task in the offloading queues, together with the transmission power, using the parameterized deep Q-network (PDQN) framework; and 3) the Computing Module (third layer), which determines how much computing resource to allocate to each task in the computation queues, using classic optimization techniques. We provide the detailed algorithm design and perform extensive simulations to evaluate its performance. Simulation results show that our proposed algorithm outperforms existing algorithms in the literature, reducing the average cost by 25.86%-72.51% and increasing the average satisfaction rate by 3.48%-90.53%.
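The first-layer Offloading Module rests on two standard D3QN ingredients: the dueling aggregation of a state value and per-action advantages, and the double-DQN bootstrap target. A minimal pure-Python sketch of that arithmetic is shown below; the function names and the two-action offloading example (local vs. enqueue for offloading) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the Q-value arithmetic behind a D3QN-style
# offloading decision (action 0: process locally, action 1: offload).

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN target: pick the next action with the online network,
    but evaluate it with the target network to curb overestimation."""
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]

# Two-action example: V(s) = 1.0, A(s, local) = 0.2, A(s, offload) = -0.2.
q = dueling_q(value=1.0, advantages=[0.2, -0.2])   # -> [1.2, 0.8]

# Bootstrap target for a transition with reward -0.5 and discount 0.9:
# online net prefers action 1 next, target net values it at 0.4.
y = double_dqn_target(reward=-0.5, gamma=0.9,
                      q_online_next=[0.3, 0.7],
                      q_target_next=[1.0, 0.4])    # -> approx. -0.14
```

In a full training loop these values would come from two neural networks (online and target); the sketch only fixes the aggregation and target formulas that distinguish dueling/double DQN from vanilla DQN.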
Pages: 36272-36290
Page count: 19