Energy-efficient collaborative task offloading in multi-access edge computing based on deep reinforcement learning

Cited: 0
Authors
Wang, Shudong [1 ]
Zhao, Shengzhe [1 ]
Gui, Haiyuan [1 ]
He, Xiao [1 ]
Lu, Zhi [1 ]
Chen, Baoyun [1 ]
Fan, Zixuan [1 ]
Pang, Shanchen [1 ]
Affiliations
[1] China University of Petroleum (East China), College of Computer Science and Technology, Qingdao 266580, People's Republic of China
Keywords
Multi-access edge computing; Collaborative task offloading; Graph neural network; Deep reinforcement learning; Device-to-Device; Resource allocation
DOI
10.1016/j.adhoc.2024.103743
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In multi-access edge computing (MEC), task offloading via device-to-device (D2D) communication can improve edge-computing performance by exploiting the computational resources of nearby mobile devices (MDs). However, adapting to the time-varying wireless environment and allocating tasks to the MEC server and to other MDs quickly and efficiently, so as to minimize the energy consumption of the MDs, remains a challenge. First, we construct a multi-device collaborative task offloading framework, modeling the collaborative offloading decision problem as a graph state-transition problem and using a graph neural network (GNN) to fully explore the latent relationships between MDs and the MEC server. Then, we propose a collaborative task offloading algorithm based on graph reinforcement learning and introduce a penalty mechanism that imposes penalties when the tasks of MDs exceed their deadlines. Simulation results show that, compared with other benchmark algorithms, the proposed algorithm reduces energy consumption by approximately 20%, achieves a higher task completion rate, and yields a more balanced load distribution.
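The abstract does not give the reward formulation, but the penalty mechanism it describes can be illustrated with a minimal sketch. Everything below (the `TaskOutcome` fields, the `offloading_reward` function, and the penalty weight `beta`) is a hypothetical illustration under assumed names, not the authors' actual formulation: a DRL agent minimizing energy would receive the negative energy consumption as reward, with an additional penalty whenever a task finishes after its deadline.

```python
from dataclasses import dataclass

@dataclass
class TaskOutcome:
    energy_j: float        # energy the MD spent on this task (joules)
    finish_time_s: float   # wall-clock completion time (seconds)
    deadline_s: float      # task deadline (seconds)

def offloading_reward(outcomes, beta=10.0):
    """Hypothetical reward: negative total energy, plus an extra
    penalty (weighted by beta) for each task that misses its deadline.
    This mirrors the paper's penalty mechanism only in spirit; the
    actual formulation is not given in the abstract."""
    reward = 0.0
    for o in outcomes:
        reward -= o.energy_j                       # minimize energy
        if o.finish_time_s > o.deadline_s:         # deadline violated
            reward -= beta * (o.finish_time_s - o.deadline_s)
    return reward

# Example: two tasks, the second misses its deadline by 0.5 s
print(offloading_reward([
    TaskOutcome(energy_j=2.0, finish_time_s=0.8, deadline_s=1.0),
    TaskOutcome(energy_j=1.5, finish_time_s=1.5, deadline_s=1.0),
]))  # -(2.0 + 1.5) - 10.0 * 0.5 = -8.5
```

Under this kind of shaping, the agent trades energy savings against deadline risk: a larger `beta` makes deadline violations dominate, pushing the policy toward the higher task completion rates the abstract reports.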
Pages: 12
Related Papers
(50 records in total)
  • [21] Online Learning in Matching Games for Task Offloading in Multi-Access Edge Computing
    Simon, Bernd
    Mehler, Helena
    Klein, Anja
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3270 - 3276
  • [22] An Online Learning Algorithm for Distributed Task Offloading in Multi-Access Edge Computing
    Sun, Zhenfeng
    Nakhai, Mohammad Reza
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2020, 68 : 3090 - 3102
  • [23] Efficient Task Offloading in Multi-access Edge Computing Servers using Asynchronous Meta Reinforcement Learning in 5G
    Ashengo, Yeabsira Asefa
    Yahiya, Tara Ali
    Zema, Nicola Roberto
    2024 IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATIONS, ISCC 2024, 2024
  • [24] Computation Offloading in Multi-Access Edge Computing: A Multi-Task Learning Approach
    Yang, Bo
    Cao, Xuelin
    Bassey, Joshua
    Li, Xiangfang
    Qian, Lijun
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2021, 20 (09) : 2745 - 2762
  • [25] Graph Attention Network Reinforcement Learning Based Computation Offloading in Multi-Access Edge Computing
    Liu, Yuxuan
    Xia, Geming
    Chen, Jian
    Zhang, Danlei
    2023 IEEE 47TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC, 2023, : 966 - 969
  • [26] Secured Computation Offloading in Multi-Access Mobile Edge Computing Networks through Deep Reinforcement Learning
    Abdullah, R.
    Yaacob, N. A.
    Salameh, A. A.
    Zaki, N. A. M.
    Bahardin, N. F.
    INTERNATIONAL JOURNAL OF INTERACTIVE MOBILE TECHNOLOGIES, 2024, 18 (11) : 80 - 91
  • [27] Collaborative Task Offloading in Vehicular Edge Multi-Access Networks
    Qiao, Guanhua
    Leng, Supeng
    Zhang, Ke
    He, Yejun
    IEEE COMMUNICATIONS MAGAZINE, 2018, 56 (08) : 48 - 54
  • [28] Deep Reinforcement Learning for Task Offloading in Edge Computing
    Xie, Bo
    Cui, Haixia
    2024 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND INTELLIGENT SYSTEMS ENGINEERING, MLISE 2024, 2024, : 250 - 254
  • [29] Multi-agent deep reinforcement learning for collaborative task offloading in mobile edge computing networks
    Chen, Minxuan
    Guo, Aihuang
    Song, Chunlin
    DIGITAL SIGNAL PROCESSING, 2023, 140
  • [30] Deep reinforcement learning-based resource allocation in multi-access edge computing
    Khani, Mohsen
    Sadr, Mohammad Mohsen
    Jamali, Shahram
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2023