Computation Migration and Resource Allocation in Heterogeneous Vehicular Networks: A Deep Reinforcement Learning Approach

Cited: 16
Authors
Wang, Hui [1 ]
Ke, Hongchang [2 ,3 ,4 ]
Liu, Gang [1 ]
Sun, Weijia [1 ]
Affiliations
[1] Changchun Univ Technol, Coll Comp Sci & Engn, Changchun 130012, Peoples R China
[2] Jilin Univ, Coll Comp Sci & Technol, Changchun 130012, Peoples R China
[3] Changchun Inst Technol, Sch Comp Technol & Engn, Changchun 130012, Peoples R China
[4] Jilin Univ, Key Lab Symbol Computat & Knowledge Engn, Minist Educ, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Servers; Resource management; Task analysis; Delays; Computational modeling; Base stations; Edge computing; Vehicular networks; mobile edge computing; reinforcement learning; computation migration; MOBILE; AWARE; MEC;
DOI
10.1109/ACCESS.2020.3024683
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
With the development of 5G technology, the requirements for data communication and computation in emerging 5G-enabled vehicular networks are becoming increasingly stringent. Computation-intensive or delay-sensitive tasks generated by vehicles need to be processed in real time, and mobile edge computing (MEC) is an appropriate solution: wireless users or vehicles can offload computation tasks to the MEC server because it has strong computation capability and is located closer to them. However, the communication and computation resources of a single MEC server are not sufficient for executing continuously generated computation-intensive or delay-sensitive tasks. We therefore consider migrating computation tasks to other MEC servers to relieve the computation and communication pressure on the current MEC server. In this article, we construct an MEC-based computation offloading framework for vehicular networks that accounts for time-varying channel states and stochastically arriving computation tasks. To minimize the total cost of the proposed MEC framework, which consists of the delay cost, computation energy cost, and bandwidth cost, we propose a deep reinforcement learning-based computation migration and resource allocation (RLCMRA) scheme that requires no prior knowledge. The RLCMRA algorithm obtains the optimal offloading and migration policy by adaptive learning that maximizes the average cumulative reward (i.e., minimizes the total cost). Extensive numerical results show that the proposed RLCMRA algorithm can adaptively learn the optimal policy and outperforms four other baseline algorithms.
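The abstract describes the cost model only at a high level. The following minimal Python sketch is an illustration, not the authors' implementation: it assumes the per-step reward is the negative weighted sum of the delay, computation energy, and bandwidth costs, and it uses a generic tabular Q-learning update in place of the paper's deep reinforcement learning agent. All weights, the state/action discretization, and the simulate_env stand-in environment are hypothetical.

import numpy as np

# Illustrative weights for the three cost terms (assumed, not from the paper).
W_DELAY, W_ENERGY, W_BANDWIDTH = 1.0, 0.5, 0.3

def step_reward(delay_cost, energy_cost, bandwidth_cost):
    """Reward = negative total cost, so maximizing reward minimizes cost."""
    total_cost = (W_DELAY * delay_cost
                  + W_ENERGY * energy_cost
                  + W_BANDWIDTH * bandwidth_cost)
    return -total_cost

# Minimal tabular Q-learning loop over discretized states (e.g., channel
# quality and MEC queue length) and actions (local execution, offload to the
# current MEC server, migrate to a neighboring MEC server). Purely a sketch
# of the learning principle, not the RLCMRA algorithm itself.
N_STATES, N_ACTIONS = 16, 3
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def simulate_env(state, action):
    """Toy stand-in for the vehicular MEC environment (random costs)."""
    delay = rng.uniform(0.1, 1.0) * (1 + action)       # hypothetical
    energy = rng.uniform(0.1, 0.5) * (1 + state % 4)   # hypothetical
    bandwidth = rng.uniform(0.0, 0.3)                  # hypothetical
    next_state = rng.integers(N_STATES)                # stochastic channel/queue
    return next_state, step_reward(delay, energy, bandwidth)

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection.
    action = (rng.integers(N_ACTIONS) if rng.random() < epsilon
              else int(np.argmax(Q[state])))
    next_state, reward = simulate_env(state, action)
    # Standard Q-learning temporal-difference update.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print("Greedy action per state:", np.argmax(Q, axis=1))

The sign convention (reward = negative total cost) is what lets a reward-maximizing agent minimize the framework's total cost; the paper's RLCMRA scheme would replace the Q-table with a deep network and the toy environment with the vehicular MEC model.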
Pages: 171140-171153
Page count: 14
Related Papers
50 records in total
  • [21] Collaborative Computation Offloading and Resource Allocation in Multi-UAV-Assisted IoT Networks: A Deep Reinforcement Learning Approach
    Seid, Abegaz Mohammed
    Boateng, Gordon Owusu
    Anokye, Stephen
    Kwantwi, Thomas
    Sun, Guolin
    Liu, Guisong
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (15) : 12203 - 12218
  • [22] Resource allocation strategy for vehicular communication networks based on multi-agent deep reinforcement learning
    Liu, Zhibin
    Deng, Yifei
    VEHICULAR COMMUNICATIONS, 2025, 53
  • [23] Computation offloading and resource allocation strategy based on deep reinforcement learning
    Zeng F.
    Zhang Z.
    Chen Z.
    Tongxin Xuebao/Journal on Communications, 2023, 44 (07) : 124 - 135
  • [24] Deep Reinforcement Learning based Computation Offloading and Resource Allocation for MEC
    Li, Ji
    Gao, Hui
    Lv, Tiejun
    Lu, Yueming
    2018 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2018,
  • [25] Decentralized Computation Offloading and Resource Allocation in MEC by Deep Reinforcement Learning
    Liang, Yeteng
    He, Yejun
    Zhong, Xiaoxu
    2020 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2020, : 244 - 249
  • [26] Deep Reinforcement Learning Aided Computation Offloading and Resource Allocation for IoT
    Gong, Yongkang
    Wang, Jingjing
    Nie, Tianzheng
    2020 IEEE COMPUTING, COMMUNICATIONS AND IOT APPLICATIONS (COMCOMAP), 2021,
  • [27] Computation Offloading and Resource Allocation in F-RANs: A Federated Deep Reinforcement Learning Approach
    Zhang, Lingling
    Jiang, Yanxiang
    Zheng, Fu-Chun
    Bennis, Mehdi
    You, Xiaohu
    2022 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2022, : 97 - 102
  • [28] Resource Allocation for Heterogeneous Service in Green Mobile Edge Networks Using Deep Reinforcement Learning
    Sun, Si-yuan
    Zheng, Ying
    Zhou, Jun-hua
    Weng, Jiu-xing
    Wei, Yi-fei
    Wang, Xiao-jun
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2021, 15 (07): : 2496 - 2512
  • [29] Resource Allocation in Mobility-Aware Federated Learning Networks: A Deep Reinforcement Learning Approach
    Nguyen, Huy T.
    Luong, Nguyen Cong
    Zhao, Jun
    Yuen, Chau
    Niyato, Dusit
    2020 IEEE 6TH WORLD FORUM ON INTERNET OF THINGS (WF-IOT), 2020,
  • [30] Deep Reinforcement Learning-Based Adaptive Computation Offloading and Power Allocation in Vehicular Edge Computing Networks
    Qiu, Bin
    Wang, Yunxiao
    Xiao, Hailin
    Zhang, Zhongshan
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (10) : 13339 - 13349