Computation Migration and Resource Allocation in Heterogeneous Vehicular Networks: A Deep Reinforcement Learning Approach

Cited by: 16
Authors
Wang, Hui [1 ]
Ke, Hongchang [2 ,3 ,4 ]
Liu, Gang [1 ]
Sun, Weijia [1 ]
Affiliations
[1] Changchun Univ Technol, Coll Comp Sci & Engn, Changchun 130012, Peoples R China
[2] Jilin Univ, Coll Comp Sci & Technol, Changchun 130012, Peoples R China
[3] Changchun Inst Technol, Sch Comp Technol & Engn, Changchun 130012, Peoples R China
[4] Jilin Univ, Key Lab Symbol Computat & Knowledge Engn, Minist Educ, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Servers; Resource management; Task analysis; Delays; Computational modeling; Base stations; Edge computing; Vehicular networks; mobile edge computing; reinforcement learning; computation migration; MOBILE; AWARE; MEC;
DOI
10.1109/ACCESS.2020.3024683
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
With the development of 5G technology, the requirements for data communication and computation in emerging 5G-enabled vehicular networks are becoming increasingly stringent. Computation-intensive and delay-sensitive tasks generated by vehicles must be processed in real time, and mobile edge computing (MEC) is an appropriate solution: wireless users or vehicles can offload computation tasks to an MEC server because it has strong computational capability and is deployed close to them. However, the communication and computation resources of a single MEC server are insufficient to execute the continuously generated computation-intensive and delay-sensitive tasks. We therefore consider migrating computation tasks to other MEC servers to relieve the computation and communication pressure on the current server. In this article, we construct an MEC-based computation offloading framework for vehicular networks that accounts for time-varying channel states and stochastically arriving computation tasks. To minimize the total cost of the proposed framework, which consists of the delay cost, computation energy cost, and bandwidth cost, we propose a deep reinforcement learning-based computation migration and resource allocation (RLCMRA) scheme that requires no prior knowledge of the environment. The RLCMRA algorithm adaptively learns the optimal offloading and migration policy, maximizing the average cumulative reward (i.e., minimizing the total cost). Extensive numerical results show that RLCMRA adaptively learns the optimal policy and outperforms four baseline algorithms.
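The core idea in the abstract, an agent that learns whether to run a task locally, offload it to the current MEC server, or migrate it to another MEC server so as to minimize a delay + energy + bandwidth cost under time-varying channels and stochastic arrivals, can be illustrated with a small sketch. Everything below (the toy state space, the cost weights, and the use of tabular Q-learning rather than the paper's deep RL) is an illustrative assumption, not the authors' actual formulation.

```python
import random

random.seed(0)  # reproducible toy run

# Hypothetical action/state spaces (assumptions, not the paper's model):
ACTIONS = ["local", "offload", "migrate"]  # run on vehicle / current MEC / another MEC
CHANNEL = ["good", "bad"]                  # time-varying wireless channel state

def cost(channel, load, action):
    """Total cost = delay + energy + bandwidth, with illustrative weights."""
    if action == "local":
        return 5.0 + load                      # high delay/energy, no bandwidth cost
    tx = 1.0 if channel == "good" else 3.0     # transmission delay depends on channel
    if action == "offload":
        return tx + 0.5 * load + 1.0           # current MEC server, slowed by its queue
    return tx + 1.0 + 2.0                      # migration: extra bandwidth cost, idle server

def step(state, action):
    """One transition: reward is the negative total cost; arrivals are stochastic."""
    channel, load = state
    reward = -cost(channel, load, action)
    load2 = load + random.choice([0, 1]) - (action != "local")  # arrival minus departure
    load2 = max(0, min(load2, 4))                               # clip queue to [0, 4]
    return (random.choice(CHANNEL), load2), reward

def train(steps=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy policy."""
    Q = {(c, l, a): 0.0 for c in CHANNEL for l in range(5) for a in ACTIONS}
    state = ("good", 0)
    for _ in range(steps):
        c, l = state
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: Q[(c, l, x)]))
        nxt, r = step(state, a)
        c2, l2 = nxt
        best_next = max(Q[(c2, l2, x)] for x in ACTIONS)
        Q[(c, l, a)] += alpha * (r + gamma * best_next - Q[(c, l, a)])
        state = nxt
    return Q

Q = train()
```

With a good channel and an empty queue, the learned Q-values should rank offloading above local execution, mirroring the paper's premise that nearby MEC servers are preferable for delay-sensitive tasks; the deep-RL version replaces the Q-table with a neural network so it scales to realistic state spaces.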
Pages: 171140-171153
Page count: 14
Related Papers
50 records in total
  • [1] Resource Allocation in MEC-enabled Vehicular Networks: A Deep Reinforcement Learning Approach
    Tan, Guoping
    Zhang, Huipeng
    Zhou, Siyuan
    IEEE INFOCOM 2020 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2020, : 406 - 411
  • [2] Deep Reinforcement Learning Based Resource Allocation for Heterogeneous Networks
    Yang, Helin
    Zhao, Jun
    Lam, Kwok-Yan
    Garg, Sahil
    Wu, Qingqing
    Xiong, Zehui
    2021 17TH INTERNATIONAL CONFERENCE ON WIRELESS AND MOBILE COMPUTING, NETWORKING AND COMMUNICATIONS (WIMOB 2021), 2021, : 253 - 258
  • [3] Multiagent Deep-Reinforcement-Learning-Based Resource Allocation for Heterogeneous QoS Guarantees for Vehicular Networks
    Tian, Jie
    Liu, Qianqian
    Zhang, Haixia
    Wu, Dalei
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (03): : 1683 - 1695
  • [4] Deep Reinforcement Learning for Resource Allocation in Multi-platoon Vehicular Networks
    Xu, Hu
    Ji, Jiequ
    Zhu, Kun
    Wang, Ran
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT II, 2021, 12938 : 402 - 416
  • [5] Deep Reinforcement Learning Framework for Joint Resource Allocation in Heterogeneous Networks
    Zhang, Yong
    Kang, Canping
    Teng, YingLei
    Li, Sisi
    Zheng, WeiJun
    Fang, JingHui
    2019 IEEE 90TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2019-FALL), 2019,
  • [6] Deep Reinforcement Learning for User Association and Resource Allocation in Heterogeneous Networks
    Zhao, Nan
    Liang, Ying-Chang
    Niyato, Dusit
    Pei, Yiyang
    Jiang, Yunhao
    2018 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2018,
  • [7] Task Offloading and Resource Allocation in Vehicular Networks: A Lyapunov-Based Deep Reinforcement Learning Approach
    Kumar, Anitha Saravana
    Zhao, Lian
    Fernando, Xavier
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (10) : 13360 - 13373
  • [8] Computation Offloading and Resource Allocation in Satellite-Terrestrial Integrated Networks: A Deep Reinforcement Learning Approach
    Xie, Junfeng
    Jia, Qingmin
    Chen, Youxing
    Wang, Wei
    IEEE ACCESS, 2024, 12 : 97184 - 97195
  • [9] Deep Reinforcement Learning for User Association and Resource Allocation in Heterogeneous Cellular Networks
    Zhao, Nan
    Liang, Ying-Chang
    Niyato, Dusit
    Pei, Yiyang
    Wu, Minghu
    Jiang, Yunhao
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2019, 18 (11) : 5141 - 5152
  • [10] Deep Reinforcement Learning-Based Adaptive Computation Offloading for MEC in Heterogeneous Vehicular Networks
    Ke, Hongchang
    Wang, Jian
    Deng, Lingyue
    Ge, Yuming
    Wang, Hui
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (07) : 7916 - 7929