EDITORS: Energy-aware Dynamic Task Offloading using Deep Reinforcement Transfer Learning in SDN-enabled Edge Nodes

Cited by: 1
Authors
Baker, Thar [1]
Al Aghbari, Zaher [2]
Khedr, Ahmed M. [2]
Ahmed, Naveed [2]
Girija, Shini [2]
Affiliations
[1] Univ Brighton, Sch Architecture Technol & Engn, Brighton BN2 4GJ, England
[2] Univ Sharjah, Dept Comp Sci, Sharjah 27272, U Arab Emirates
Keywords
Dynamic task offloading; Trust; Energy efficiency; Deep Reinforcement Transfer Learning; Software defined networks; Resource allocation; Security; Fog
DOI
10.1016/j.iot.2024.101118
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
In mobile edge computing systems, a task offloading approach should balance efficiency, adaptability, trust management, and reliability. Such an approach aims to maximise resource utilisation, improve user experience, and satisfy application-specific requirements while accounting for the dynamic and resource-constrained nature of edge environments. Additionally, while offloading tasks, these systems are vulnerable to several attacks and privacy breaches, necessitating trust evaluation of edge nodes. However, not all of these necessary features are present in current offloading methods. This research proposes EDITORS, an energy-aware dynamic task offloading method that uses Deep Reinforcement Transfer Learning (DRTL) in Software-Defined Network (SDN)-enabled edge computing environments, as a novel approach to the multifaceted issues associated with offloading in mobile edge computing systems. The primary goal of EDITORS is to design a task offloading system that incorporates trusted edge nodes, prioritises energy efficiency, timeliness, reliability, and adaptability, and outperforms existing task offloading methods in terms of the quality of the offloading plan. The method deploys DRTL agents at edge nodes, which communicate with the SDN controller to learn the most appropriate offloading choices based on network conditions and resource availability. Six extensive simulations show that EDITORS significantly increases energy efficiency while preserving low-latency task completion compared to five existing offloading methods (DDLO, DROO, DMRO, DRL without TL and SDN, and DRL with SDN). Unlike other methods that concentrate solely on task offloading, EDITORS also includes trust evaluation, prediction of trusted edge devices using LSTM, and adaptation of newly added devices through transfer learning.
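To illustrate the kind of agent the abstract describes, the minimal sketch below shows a deep Q-learning offloading agent with an energy/latency-weighted reward and a transfer-learning warm start for a newly added edge node. It is not the authors' implementation: the network sizes, the reward weighting, the state layout (per-node link quality and residual CPU as hypothetically reported by the SDN controller), and all identifiers such as OffloadQNet are assumptions made only for this example.

```python
# Illustrative sketch only (not the EDITORS code): a tiny DQN-style offloading agent
# trained on synthetic observations, plus a transfer-learning warm start.
import random
import torch
import torch.nn as nn

N_NODES = 4                 # hypothetical: local execution plus 3 candidate edge nodes
STATE_DIM = 2 * N_NODES     # assumed per-node features: (link quality, residual CPU)

class OffloadQNet(nn.Module):
    """Maps the observed network/resource state to one Q-value per offloading choice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_NODES),
        )

    def forward(self, state):
        return self.net(state)

def reward(energy_j, latency_s, alpha=0.5):
    # Energy-aware reward: penalise both energy use and completion latency (assumed weighting).
    return -(alpha * energy_j + (1.0 - alpha) * latency_s)

def train_step(qnet, optimiser, state, action, r, next_state, gamma=0.9):
    # One temporal-difference update of the Q-network.
    q = qnet(state)[action]
    with torch.no_grad():
        target = r + gamma * qnet(next_state).max()
    loss = (q - target) ** 2
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

qnet = OffloadQNet()
optimiser = torch.optim.Adam(qnet.parameters(), lr=1e-3)

state = torch.rand(STATE_DIM)                 # synthetic observation standing in for SDN telemetry
for step in range(200):
    # Epsilon-greedy choice of offloading target.
    action = random.randrange(N_NODES) if random.random() < 0.1 else int(qnet(state).argmax())
    energy, latency = random.uniform(0.1, 1.0), random.uniform(0.01, 0.5)  # stand-in measurements
    next_state = torch.rand(STATE_DIM)
    train_step(qnet, optimiser, state, action, reward(energy, latency), next_state)
    state = next_state

# Transfer learning: a newly added edge node starts from the trained weights rather than from scratch.
new_node_agent = OffloadQNet()
new_node_agent.load_state_dict(qnet.state_dict())
```

The weight-copy at the end is one simple way to realise the adaptation of newly added devices that the abstract mentions; the paper's DRTL agents, trust evaluation, and LSTM-based trusted-device prediction are not reproduced here.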
Pages: 23