EDITORS: Energy-aware Dynamic Task Offloading using Deep Reinforcement Transfer Learning in SDN-enabled Edge Nodes

Cited by: 1
Authors
Baker, Thar [1 ]
Al Aghbari, Zaher [2 ]
Khedr, Ahmed M. [2 ]
Ahmed, Naveed [2 ]
Girija, Shini [2 ]
Affiliations
[1] Univ Brighton, Sch Architecture Technol & Engn, Brighton BN2 4GJ, England
[2] Univ Sharjah, Dept Comp Sci, Sharjah 27272, U Arab Emirates
Keywords
Dynamic task offloading; Trust; Energy efficiency; Deep Reinforcement Transfer Learning; Software-defined networks; Resource allocation; Security; Fog
DOI
10.1016/j.iot.2024.101118
Chinese Library Classification Code
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
In mobile edge computing systems, a task offloading approach should balance efficiency, adaptability, trust management, and reliability. Such an approach aims to maximise resource utilisation, improve user experience, and satisfy application-specific requirements while accounting for the dynamic and resource-constrained nature of edge environments. Moreover, while offloading tasks, these systems are vulnerable to several attacks and privacy breaches, necessitating trust evaluation of edge nodes. However, existing offloading methods do not provide all of these necessary features. This research proposes EDITORS (Energy-aware Dynamic Task Offloading using Deep Reinforcement Transfer Learning (DRTL) in Software-Defined Network (SDN) enabled edge computing environments), a novel approach that addresses the multifaceted issues associated with offloading in mobile edge computing systems. The primary goal of EDITORS is to design a task offloading system that incorporates trusted edge nodes while prioritising energy efficiency, timeliness, reliability, and adaptability, and that outperforms existing task offloading methods in terms of the quality of the offloading plan. The method deploys DRTL agents at edge nodes, which communicate with the SDN controller to learn the most appropriate offloading choices based on network conditions and resource availability. Six extensive simulation studies show that EDITORS significantly increases energy efficiency while preserving low-latency task completion compared with five existing offloading methods (DDLO, DROO, DMRO, DRL without TL and SDN, and DRL with SDN). Unlike methods that concentrate solely on the offloading decision, EDITORS also includes trust evaluation, LSTM-based prediction of trusted edge devices, and adaptation of newly added devices through transfer learning.
Pages: 23
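
Illustrative sketch (not from the paper): the abstract above describes DRTL agents at edge nodes that learn offloading choices from network conditions, resource availability, and node trust. The Python sketch below shows, under loudly stated assumptions, how such an edge-side decision loop might look in its simplest form. All names (OffloadAgent, shaped_reward, the feature layout) are hypothetical, the deep reinforcement transfer learning agent is stood in for by a plain linear Q-value approximation, and the SDN controller interface, LSTM trust predictor, and transfer-learning step of EDITORS are not reproduced here.

# Minimal illustrative sketch (not the EDITORS implementation): an epsilon-greedy
# agent with a linear Q-value approximation that decides whether to run a task
# locally or offload it to one of several candidate edge nodes, penalising
# nodes whose (assumed) trust score is low. Features and reward are assumptions.
import numpy as np

N_NODES = 4        # action 0 = execute locally, actions 1..N_NODES = offload to node i
N_FEATURES = 3     # assumed per-action features: [task size, node load, node trust score]

class OffloadAgent:
    def __init__(self, lr=0.01, eps=0.1, gamma=0.9):
        # one linear weight vector per action (local execution + each candidate node)
        self.w = np.zeros((N_NODES + 1, N_FEATURES))
        self.lr, self.eps, self.gamma = lr, eps, gamma

    def q_values(self, state):
        # state: (N_NODES + 1, N_FEATURES) matrix, one feature row per action
        return np.einsum("af,af->a", self.w, state)

    def act(self, state):
        if np.random.rand() < self.eps:                 # explore
            return np.random.randint(N_NODES + 1)
        return int(np.argmax(self.q_values(state)))     # exploit

    def update(self, state, action, reward, next_state):
        # one-step temporal-difference update of the chosen action's weights
        target = reward + self.gamma * np.max(self.q_values(next_state))
        td_error = target - self.q_values(state)[action]
        self.w[action] += self.lr * td_error * state[action]

def shaped_reward(energy_j, latency_s, trust_score):
    # assumed reward: lower energy and latency are better; untrusted nodes are penalised
    penalty = 0.0 if trust_score >= 0.5 else 5.0
    return -(energy_j + latency_s) - penalty

# Toy usage: random features stand in for telemetry an SDN controller would provide.
agent = OffloadAgent()
state = np.random.rand(N_NODES + 1, N_FEATURES)
action = agent.act(state)
agent.update(state, action,
             shaped_reward(energy_j=0.2, latency_s=0.05, trust_score=0.8),
             np.random.rand(N_NODES + 1, N_FEATURES))
print("chosen action:", "local execution" if action == 0 else f"offload to edge node {action}")

In the paper's setting, the linear approximation would be replaced by a deep network whose learned weights can be transferred to newly added devices, and the trust score would come from the LSTM-based trusted-device prediction described in the abstract rather than a fixed input.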