An Approach for Offloading Divisible Tasks Using Double Deep Reinforcement Learning in Mobile Edge Computing Environment

Cited by: 0
Authors
Kabdjou, Joelle [1 ]
Shinomiya, Norihiko [1 ]
Affiliations
[1] Soka Univ, Grad Sch Sci & Engn, Tokyo, Japan
Keywords
Mobile Edge Computing (MEC); task offloading; double deep reinforcement learning; Markov Decision Process (MDP); Quality of Physical Experience (QoPE); SQ-PER (Self-adaptive Q-network with Prioritized Experience Replay) algorithm; resource allocation; networks
DOI
10.1109/ITC-CSCC62988.2024.10628259
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Mobile Edge Computing (MEC) decentralizes computing resources closer to end-users, enabling efficient task offloading to MEC servers and reducing latency and network congestion. To address the accompanying security challenges, we introduce a double deep reinforcement learning strategy for divisible task offloading in MEC environments. Our approach assesses the security level of each offloading decision from task-source distances, defines a dedicated MEC state representation, and divides tasks dynamically for parallel execution across multiple nodes. By modeling task offloading as a Markov Decision Process (MDP), we optimize the Quality of Physical Experience (QoPE), accounting for time delay, energy consumption, and security risk. The proposed SQ-PER algorithm, which integrates a self-adaptive Q-network with prioritized experience replay on top of a Double Deep Q-Network (DDQN), improves learning efficiency and stability. Simulation results show substantial reductions in time delay, task energy consumption, and offloading security risk with the SQ-PER algorithm.
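The abstract names the two learning components behind SQ-PER: a Double Deep Q-Network (DDQN) target and prioritized experience replay. The sketch below is a minimal, illustrative Python rendering of just that update loop, not the authors' implementation: the linear Q-approximator, the state/action dimensions (4 features, 3 offloading actions), the reward value, and the hyperparameters (alpha, beta, gamma, learning rate) are assumptions made for the example, and the paper's self-adaptive Q-network and QoPE-based reward shaping are not reproduced here.

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized experience replay (simplified list-based version)."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prio = [], []

    def add(self, transition):
        # New transitions receive the current maximum priority so each is replayed at least once.
        p = max(self.prio, default=1.0)
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.prio.pop(0)
        self.data.append(transition)
        self.prio.append(p)

    def sample(self, batch_size, beta=0.4):
        probs = np.asarray(self.prio) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias from non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors, eps=1e-3):
        for i, e in zip(idx, td_errors):
            self.prio[i] = abs(e) + eps


def ddqn_update(q_online, q_target, batch, weights, gamma=0.99, lr=1e-3):
    """One Double-DQN step for a linear approximator Q(s, a) = s @ W[:, a] (illustrative)."""
    td_errors = []
    for (s, a, r, s_next, done), w in zip(batch, weights):
        # Double DQN: the online network chooses the next action, the target network evaluates it.
        a_next = int(np.argmax(s_next @ q_online))
        target = r if done else r + gamma * (s_next @ q_target)[a_next]
        td = target - (s @ q_online)[a]
        q_online[:, a] += lr * w * td * s  # weighted semi-gradient step for the linear model
        td_errors.append(td)
    return td_errors


# Tiny demo with assumed dimensions: 4 state features, 3 offloading actions.
buf = PrioritizedReplay(capacity=10_000)
q_online = np.zeros((4, 3))
q_target = q_online.copy()
buf.add((np.ones(4), 0, -1.2, np.ones(4), False))  # reward would encode delay/energy/security costs
idx, batch, w = buf.sample(batch_size=1)
buf.update_priorities(idx, ddqn_update(q_online, q_target, batch, w))
```

The DDQN detail sits in ddqn_update: the online weights select the greedy next action while the target weights evaluate it, which reduces the overestimation bias of plain DQN, and the replay buffer then re-weights transitions by their TD error so informative experiences are sampled more often.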
Pages: 6
Related Papers (50 items in total)
  • [1] Deep reinforcement learning for computation offloading in mobile edge computing environment
    Chen, Miaojiang
    Wang, Tian
    Zhang, Shaobo
    Liu, Anfeng
    COMPUTER COMMUNICATIONS, 2021, 175 : 1 - 12
  • [2] A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing
    Wang, Qing
    Tan, Wenan
    Qin, Xiaofan
    HUMAN CENTERED COMPUTING, 2019, 11956 : 419 - 430
  • [3] Divisible Task Offloading for Multiuser Multiserver Mobile Edge Computing Systems Based on Deep Reinforcement Learning
    Tang, Lin
    Qin, Hang
    IEEE ACCESS, 2023, 11 : 83507 - 83522
  • [4] A Deep Reinforcement Learning Approach for Online Computation Offloading in Mobile Edge Computing
    Zhang, Yameng
    Liu, Tong
    Zhu, Yanmin
    Yang, Yuanyuan
    2020 IEEE/ACM 28TH INTERNATIONAL SYMPOSIUM ON QUALITY OF SERVICE (IWQOS), 2020
  • [5] Deep Reinforcement Learning for Task Offloading in Mobile Edge Computing Systems
    Tang, Ming
    Wong, Vincent W. S.
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2022, 21 (06) : 1985 - 1997
  • [6] Joint Offloading and Resource Allocation Using Deep Reinforcement Learning in Mobile Edge Computing
    Zhang, Xinjie
    Zhang, Xinglin
    Yang, Wentao
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2022, 9 (05) : 3454 - 3466
  • [7] Computation Offloading for Mobile Edge Computing: A Deep Learning Approach
    Yu, Shuai
    Wang, Xin
    Langar, Rami
    2017 IEEE 28TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR, AND MOBILE RADIO COMMUNICATIONS (PIMRC), 2017
  • [8] Privacy-preserving task offloading in mobile edge computing: A deep reinforcement learning approach
    Xia, Fanglue
    Chen, Ying
    Huang, Jiwei
    SOFTWARE-PRACTICE & EXPERIENCE, 2024, 54 (09) : 1774 - 1792
  • [9] Wireless Power Assisted Computation Offloading in Mobile Edge Computing: A Deep Reinforcement Learning Approach
    Maray, Mohammed
    Mustafa, Ehzaz
    Shuja, Junaid
    HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES, 2024, 14
  • [10] A Computing Offloading Resource Allocation Scheme Using Deep Reinforcement Learning in Mobile Edge Computing Systems
    Li, Xuezhu
    JOURNAL OF GRID COMPUTING, 2021, 19