A Deep-Reinforcement-Learning-Based Computation Offloading With Mobile Vehicles in Vehicular Edge Computing

Cited by: 6
Authors
Lin, Jie [1 ]
Huang, Siqi [2 ]
Zhang, Hanlin [3 ]
Yang, Xinyu [1 ]
Zhao, Peng [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Comp Sci & Technol, Xian 710049, Peoples R China
[2] Xi An Jiao Tong Univ, Sch Software Engn, Xian 710049, Peoples R China
[3] Qingdao Univ, Coll Comp Sci & Technol, Qingdao 266071, Peoples R China
Keywords
Task analysis; Servers; Mobile handsets; Iron; Quality of experience; Computational modeling; Reinforcement learning; Computation offloading; Deep-Reinforcement-Learning; mobile edge servers (MESs); vehicle-to-vehicle (V2V) communications; vehicular edge networks; INTERNET; EFFICIENT; NETWORKS; PATH; GAME; IOT;
DOI
10.1109/JIOT.2023.3264281
CLC number
TP [automation technology, computer technology];
Discipline code
0812;
Abstract
Vehicular edge networks place edge servers close to mobile devices to provide additional computation resources, so that the devices' computation tasks can be completed with low latency and high reliability. Considerable effort has been devoted to computation offloading in vehicular edge networks to reduce energy consumption and computation latency, with roadside units (RSUs) usually serving as fixed edge servers (FESs). Nonetheless, computation offloading that treats mobile vehicles as mobile edge servers (MESs) in vehicular edge networks still needs further investigation. To this end, this article proposes a Deep-Reinforcement-Learning-based computation offloading scheme with mobile vehicles in vehicular edge computing (DRL-COMV), in which some vehicles (e.g., autonomous vehicles) are deployed as MESs that move through the vehicular edge network and cooperate with FESs to provide extra computation resources, helping to complete the computation tasks of mobile devices with high Quality of Experience (QoE), i.e., low latency. In particular, a computation offloading model that considers both mobile and fixed edge servers is built, in which tasks are offloaded through vehicle-to-vehicle (V2V) communications, and collaborative route planning guides the movement of the MESs through the network so as to improve offloading efficiency. A Deep-Reinforcement-Learning approach with a rationally designed reward function is then proposed to determine effective offloading strategies for multiple mobile devices and multiple edge servers, with the objective of maximizing QoE (i.e., minimizing latency) for the mobile devices. Performance evaluations show that the proposed DRL-COMV scheme achieves good convergence and stability.
Additionally, the results demonstrate that DRL-COMV achieves both better QoE and a higher task-offloading request hit ratio for mobile devices than existing approaches (DDPG, IMOPSOQ, and GABDOS).
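The offloading decision the abstract describes can be illustrated with a toy model: a task may run locally, on a fixed edge server (FES), or on a mobile edge server (MES) reached over a V2V link, and the reward grows as latency falls below the task's deadline. All server names, rates, and the reward shape below are illustrative assumptions for a minimal sketch, not the paper's actual formulation.

```python
def task_latency(cycles, data_bits, cpu_hz, rate_bps=None):
    """Latency = optional transmission delay + execution delay."""
    tx_delay = data_bits / rate_bps if rate_bps else 0.0
    return tx_delay + cycles / cpu_hz

def qoe_reward(latency, deadline):
    """QoE-style reward: positive only when the task beats its
    deadline; lower latency yields a higher reward."""
    return (deadline - latency) / deadline

def best_action(cycles, data_bits, deadline, servers):
    """Greedy one-step policy: pick the server maximizing the reward."""
    rewards = {
        name: qoe_reward(task_latency(cycles, data_bits, cpu_hz, rate_bps),
                         deadline)
        for name, (cpu_hz, rate_bps) in servers.items()
    }
    return max(rewards, key=rewards.get), rewards

# Example: a 2-Gcycle task with 4 Mbit of input and a 1 s deadline.
servers = {
    "local": (1e9, None),   # 1 GHz on-device CPU, nothing transmitted
    "FES":   (10e9, 50e6),  # RSU: 10 GHz CPU behind a 50 Mb/s link
    "MES":   (5e9, 20e6),   # nearby vehicle: 5 GHz CPU over 20 Mb/s V2V
}
choice, rewards = best_action(2e9, 4e6, 1.0, servers)
print(choice)  # the FES wins here: 0.08 s upload + 0.2 s compute
```

In the paper's setting, a DRL agent would learn a policy over such rewards across many devices, servers, and time steps rather than choosing greedily per task as this sketch does.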
Pages: 15501-15514
Page count: 14
Related Papers
(50 records in total)
  • [1] Geng, Liwei; Zhao, Hongbo; Wang, Jiayue; Kaushik, Aryan; Yuan, Shuai; Feng, Wenquan. Deep-Reinforcement-Learning-Based Distributed Computation Offloading in Vehicular Edge Computing Networks. IEEE Internet of Things Journal, 2023, 10(14): 12416-12433.
  • [2] Zhan, Wenhan; Luo, Chunbo; Wang, Jin; Wang, Chao; Min, Geyong; Duan, Hancong; Zhu, Qingxin. Deep-Reinforcement-Learning-Based Offloading Scheduling for Vehicular Edge Computing. IEEE Internet of Things Journal, 2020, 7(6): 5449-5465.
  • [3] Yan, Junjie; Zhao, Xiaohui; Li, Zan. Deep-Reinforcement-Learning-Based Computation Offloading in UAV-Assisted Vehicular Edge Computing Networks. IEEE Internet of Things Journal, 2024, 11(11): 19882-19897.
  • [4] Zhan, Wenhan; Luo, Chunbo; Wang, Jin; Min, Geyong; Duan, Hancong. Deep Reinforcement Learning-Based Computation Offloading in Vehicular Edge Computing. 2019 IEEE Global Communications Conference (GLOBECOM), 2019.
  • [5] Chen, Miaojiang; Wang, Tian; Zhang, Shaobo; Liu, Anfeng. Deep reinforcement learning for computation offloading in mobile edge computing environment. Computer Communications, 2021, 175: 1-12.
  • [6] Li, MingChu; Mao, Ning; Zheng, Xiao; Gadekallu, Thippa Reddy. Computation Offloading in Edge Computing Based on Deep Reinforcement Learning. Proceedings of International Conference on Computing and Communication Networks (ICCCN 2021), 2022, 394: 339-353.
  • [7] Shi, Wei; Chen, Long; Zhu, Xia. Task Offloading Decision-Making Algorithm for Vehicular Edge Computing: A Deep-Reinforcement-Learning-Based Approach. Sensors, 2023, 23(17).
  • [8] Zhang, Yameng; Liu, Tong; Zhu, Yanmin; Yang, Yuanyuan. A Deep Reinforcement Learning Approach for Online Computation Offloading in Mobile Edge Computing. 2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS), 2020.
  • [9] Wang, Qing; Tan, Wenan; Qin, Xiaofan. A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing. Human Centered Computing, 2019, 11956: 419-430.
  • [10] Lin, Bing; Lin, Kai; Lin, Changhang; Lu, Yu; Huang, Ziqing; Chen, Xinwei. Computation offloading strategy based on deep reinforcement learning for connected and autonomous vehicle in vehicular edge computing. Journal of Cloud Computing: Advances, Systems and Applications, 2021, 10(1).