Deep Reinforcement Learning Empowered Resource Allocation in Vehicular Fog Computing

Cited by: 1
Authors
Sun, Lijun [1 ]
Liu, Mingzhi [2 ]
Guo, Jiachen [1 ]
Yu, Xu [3 ]
Wang, Shangguang [4 ]
Affiliations
[1] Qingdao Univ Sci & Technol, Coll Informat Sci & Technol, Qingdao 266101, Peoples R China
[2] Ocean Univ China, Qingdao 266005, Peoples R China
[3] China Univ Petr, Qingdao Inst Software, Qingdao 266005, Peoples R China
[4] Beijing Univ Posts & Telecommun, Sch Comp Sci, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Resource management; Task analysis; Adaptation models; Vehicle dynamics; Computer architecture; Trajectory; Dynamic scheduling; 6G; deep reinforcement learning (DRL); Internet of vehicles (IoV); resource allocation; vehicular fog computing (VFC);
DOI
10.1109/TVT.2023.3338578
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Recent advances in fog computing have significantly influenced the development of the Internet of Vehicles (IoV). Rapidly growing on-vehicle applications demand low-latency computing, putting enormous pressure on fog servers. Vehicular fog computing (VFC) can relieve this pressure by utilizing the idle resources of neighboring vehicles to complete on-vehicle application tasks, especially in the forthcoming 6G environment. However, vehicle mobility adds tremendous complexity to the allocation of on-vehicle resources. In a dynamic vehicular network, accurately learning vehicle users' real-time demands and promptly allocating the idle resources of neighboring vehicles are therefore the keys to optimal resource allocation. This paper proposes a three-layer VFC cooperation architecture that enables cooperation between vehicles and dynamically coordinates resource allocation for the IoV. The architecture predicts traffic flow with a deep learning (DL) method to intelligently estimate the number of available vehicle resources and tasks. A deep reinforcement learning (DRL) method then dynamically and adaptively decides when to match vehicle resources with tasks, so as to maximize the success rate of vehicle resource allocation. Finally, experiments show that our method improves the matching benefit by nearly 1.2 times compared with the baseline methods.
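The following is a minimal, assumption-heavy sketch of the matching-time decision described in the abstract. It uses a tabular Q-learning agent as a simplified stand-in for the paper's DRL method; the state discretization (idle-resource level, pending-task level), the wait/match action set, the reward model, and the toy transition dynamics are all illustrative assumptions, not the authors' formulation.

# Minimal sketch (assumptions throughout): tabular Q-learning stand-in for the
# DRL matching-time decision. State: (discretized idle-resource level,
# discretized pending-task level); action: 0 = wait one slot, 1 = match now.
import numpy as np

rng = np.random.default_rng(0)
N_LEVELS = 5                       # discretization bins for resources and tasks
ACTIONS = 2                        # 0: wait, 1: trigger matching
Q = np.zeros((N_LEVELS, N_LEVELS, ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: matching succeeds more often when idle resources
    cover pending tasks; waiting lets both quantities drift as vehicles move."""
    res, tasks = state
    if action == 1:                                # match now
        success = rng.random() < (res + 1) / (tasks + res + 2)
        reward = 1.0 if success else -1.0
        next_state = (rng.integers(N_LEVELS), rng.integers(N_LEVELS))
    else:                                          # wait: loads and neighbors change
        reward = -0.05                             # small delay penalty
        next_state = (min(N_LEVELS - 1, max(0, res + rng.integers(-1, 2))),
                      min(N_LEVELS - 1, max(0, tasks + rng.integers(-1, 2))))
    return next_state, reward

state = (rng.integers(N_LEVELS), rng.integers(N_LEVELS))
for t in range(20000):
    # epsilon-greedy action selection over the current Q estimates
    a = rng.integers(ACTIONS) if rng.random() < eps else int(np.argmax(Q[state]))
    nxt, r = step(state, a)
    Q[state][a] += alpha * (r + gamma * np.max(Q[nxt]) - Q[state][a])
    state = nxt

print("Greedy policy (rows: resource level, cols: task level, 1 = match now):")
print(np.argmax(Q, axis=2))

The learned policy table illustrates the intended behavior: matching is triggered when estimated idle resources are likely to cover pending tasks, and deferred otherwise; the paper's DRL agent makes the analogous timing decision over the predicted traffic state.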
Pages: 7066-7076
Page count: 11