An Approach for Offloading Divisible Tasks Using Double Deep Reinforcement Learning in Mobile Edge Computing Environment

Cited: 0
Authors
Kabdjou, Joelle [1 ]
Shinomiya, Norihiko [1 ]
Affiliations
[1] Soka Univ, Grad Sch Sci & Engn, Tokyo, Japan
Keywords
Mobile Edge Computing (MEC); task offloading; double deep reinforcement learning; Markov Decision Process (MDP); Quality of Physical Experience (QoPE); SQ-PER (Self-adaptive Q-network with Prioritized Experience Replay) algorithm; RESOURCE-ALLOCATION; NETWORKS;
DOI
10.1109/ITC-CSCC62988.2024.10628259
Chinese Library Classification: TM (Electrical Engineering); TN (Electronics and Communication Technology)
Discipline codes: 0808; 0809
Abstract
Mobile Edge Computing (MEC) reshapes computing by placing resources closer to end-users, enabling efficient task offloading to MEC servers and mitigating latency and network congestion. To address the accompanying security challenges, we introduce a novel double deep reinforcement learning strategy for divisible task offloading in MEC environments. Our approach assesses the security level of each offloading decision from the task-to-source distance, defines a dedicated MEC state representation, and divides tasks dynamically for parallel execution across multiple nodes. By modeling task offloading as a Markov Decision Process (MDP), we optimize the Quality of Physical Experience (QoPE), accounting for time delay, energy consumption, and security risk. The proposed SQ-PER algorithm, which integrates a self-adaptive Q-network with prioritized experience replay on top of a Double Deep Q-Network (DDQN), improves learning efficiency and stability. Simulation results show substantial reductions in time delay, task energy consumption, and offloading security risk under SQ-PER.
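The record does not include the SQ-PER implementation, but the two building blocks the abstract names (the Double-DQN target and proportional prioritized experience replay) can be sketched as follows. This is a minimal illustrative sketch with a tabular Q-function standing in for the paper's self-adaptive Q-network; all class and function names here are hypothetical, not the authors' code.

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized replay: P(i) ∝ p_i^alpha, with IS weights."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition):
        # New transitions get the current max priority so they are replayed soon.
        p = max(self.priorities, default=1.0)
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0); self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.buffer[i] for i in idx], weights

    def update(self, idx, td_errors):
        # Priority is the absolute TD error (plus epsilon to keep it nonzero).
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(e) + 1e-6

def ddqn_update(Q_online, Q_target, batch, weights, gamma=0.99, lr=0.1):
    """Double-DQN rule: online net selects the action, target net evaluates it."""
    td_errors = []
    for (s, a, r, s2, done), w in zip(batch, weights):
        a_star = np.argmax(Q_online[s2])                             # select
        target = r + (0.0 if done else gamma * Q_target[s2, a_star]) # evaluate
        td = target - Q_online[s, a]
        Q_online[s, a] += lr * w * td                                # weighted step
        td_errors.append(td)
    return td_errors
```

In a MEC offloading setting the state would encode task size, node loads, and the distance-based security level, and the reward would combine delay, energy, and security penalties into the QoPE objective; those specifics are not in this record.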
Pages: 6