An Approach for Offloading Divisible Tasks Using Double Deep Reinforcement Learning in Mobile Edge Computing Environment

Cited by: 0
Authors
Kabdjou, Joelle [1 ]
Shinomiya, Norihiko [1 ]
Affiliations
[1] Soka Univ, Grad Sch Sci & Engn, Tokyo, Japan
Keywords
Mobile Edge Computing (MEC); task offloading; double deep reinforcement learning; Markov Decision Process (MDP); Quality of Physical Experience (QoPE); SQ-PER (Self-adaptive Q-network with Prioritized experience Replay) algorithm; RESOURCE-ALLOCATION; NETWORKS;
DOI
10.1109/ITC-CSCC62988.2024.10628259
Chinese Library Classification (CLC)
TM (Electrical Engineering); TN (Electronics and Communication Technology);
Subject Classification Codes
0808; 0809;
Abstract
Mobile Edge Computing (MEC) revolutionizes computing by decentralizing resources nearer to end-users, facilitating efficient task offloading to MEC servers, and addressing latency and network congestion. To tackle security challenges, we introduce a novel double deep reinforcement learning strategy for divisible task offloading in MEC setups. Our approach involves assessing offloading security levels based on task-source distances, creating a unique MEC state framework, and implementing dynamic task division for parallel execution across multiple nodes. By modeling task offloading through Markov Decision Process (MDP), we optimize Quality of Physical Experience (QoPE), considering time delays, energy usage, and security concerns. The proposed SQ-PER algorithm, integrating a self-adaptive Q-network with prioritized experience replay based on Double Deep Q-Network (DDQN), boosts learning efficiency and stability. Simulation outcomes underscore substantial reductions in time delay, task energy consumption, and offloading security risks achieved with the SQ-PER algorithm.
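The abstract combines two standard ingredients in SQ-PER: the Double DQN target (the online network selects the next action, the target network evaluates it, which reduces Q-value overestimation) and prioritized experience replay (transitions are sampled in proportion to their TD error). A minimal NumPy sketch of just these two pieces follows; the function names are our own, and the paper's self-adaptive Q-network and MEC state model are not reproduced here.

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, gamma, dones):
    """Double DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
    `dones` masks out the bootstrap term for terminal transitions."""
    best_actions = np.argmax(q_online_next, axis=1)          # online net picks
    evaluated = q_target_next[np.arange(len(rewards)), best_actions]  # target net evaluates
    return rewards + gamma * (1.0 - dones) * evaluated

def prioritized_probs(td_errors, alpha=0.6, eps=1e-3):
    """Prioritized replay: sampling probability proportional to |TD error|^alpha.
    `eps` keeps zero-error transitions sampleable; alpha=0 recovers uniform replay."""
    p = (np.abs(td_errors) + eps) ** alpha
    return p / p.sum()

# Tiny worked example: two transitions, two actions.
q_online_next = np.array([[1.0, 2.0], [3.0, 1.0]])
q_target_next = np.array([[5.0, 6.0], [7.0, 8.0]])
targets = ddqn_targets(q_online_next, q_target_next,
                       rewards=np.array([1.0, 0.0]),
                       gamma=0.9,
                       dones=np.array([0.0, 1.0]))
probs = prioritized_probs(np.array([0.5, -2.0, 0.1]))
```

In the example, the online net picks action 1 in the first next-state, the target net values it at 6, so the first target is 1 + 0.9 * 6 = 6.4; the second transition is terminal, so its target is just the reward. The transition with the largest |TD error| (-2.0) gets the highest replay probability.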
Pages: 6
Related Papers
50 records in total
  • [11] A Computing Offloading Resource Allocation Scheme Using Deep Reinforcement Learning in Mobile Edge Computing Systems
    Li, Xuezhu
    JOURNAL OF GRID COMPUTING, 2021, 19 (03)
  • [12] Online computation offloading with double reinforcement learning algorithm in mobile edge computing
    Liao, Linbo
    Lai, Yongxuan
    Yang, Fan
    Zeng, Wenhua
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2023, 171 : 28 - 39
  • [13] Multiple Workflows Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing
    Gao, Yongqiang
    Wang, Yanping
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT I, 2022, 13155 : 476 - 493
  • [14] Task graph offloading via deep reinforcement learning in mobile edge computing
    Liu, Jiagang
    Mi, Yun
    Zhang, Xinyu
    Li, Xiaocui
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 158 : 545 - 555
  • [15] Research on Task Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing
    Lu H.
    Gu C.
    Luo F.
    Ding W.
    Yang T.
    Zheng S.
Gu, Chunhua (chgu@ecust.edu.cn), 1600, Science Press (57): 1539 - 1554
  • [16] Maritime mobile edge computing offloading method based on deep reinforcement learning
    Su X.
    Meng L.
    Zhou Y.
    Celimuge W.
Tongxin Xuebao/Journal on Communications, 2022, 43 (10): 133 - 145
  • [17] Task Offloading Optimization in Mobile Edge Computing based on Deep Reinforcement Learning
    Silva, Carlos
    Magaia, Naercio
    Grilo, Antonio
    PROCEEDINGS OF THE INT'L ACM CONFERENCE ON MODELING, ANALYSIS AND SIMULATION OF WIRELESS AND MOBILE SYSTEMS, MSWIM 2023, 2023, : 109 - 118
  • [18] A Clustering Offloading Decision Method for Edge Computing Tasks Based on Deep Reinforcement Learning
    Zhen Zhang
    Huanzhou Li
    Zhangguo Tang
    Dinglin Gu
    Jian Zhang
    New Generation Computing, 2023, 41 : 85 - 108
  • [19] A Clustering Offloading Decision Method for Edge Computing Tasks Based on Deep Reinforcement Learning
    Zhang, Zhen
    Li, Huanzhou
    Tang, Zhangguo
    Gu, Dinglin
    Zhang, Jian
    NEW GENERATION COMPUTING, 2023, 41 (01) : 85 - 108
  • [20] Decentralized computation offloading for multi-user mobile edge computing: a deep reinforcement learning approach
    Zhao Chen
    Xiaodong Wang
    EURASIP Journal on Wireless Communications and Networking, 2020