Multi-Objective Deep Reinforcement Learning for Efficient Workload Orchestration in Extreme Edge Computing

Cited by: 0
Authors
Safavifar, Zahra [1 ]
Gyamfi, Eric [1 ]
Mangina, Eleni [1 ]
Golpayegani, Fatemeh [1 ]
Affiliations
[1] Univ Coll Dublin, Sch Comp Sci, Dublin 4, Ireland
Source
IEEE ACCESS, 2024, Vol. 12
Funding
Science Foundation Ireland
Keywords
Task analysis; Edge computing; Computational modeling; Deep reinforcement learning; Servers; Dynamic scheduling; Resource management; Reinforcement learning; workload orchestration; deep reinforcement learning (DRL); resource-constrained environment; extreme edge computing; RESOURCE-ALLOCATION;
DOI
10.1109/ACCESS.2024.3405411
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline Classification Code: 0812
Abstract
Workload orchestration at the edge of the network has become increasingly challenging with the ever-increasing penetration of resource-demanding, heterogeneous mobile devices offering low-latency services. The literature has addressed this challenge by assuming the availability of multi-access Mobile Edge Computing (MEC) servers and placing the computing tasks of such services on those servers. However, to develop a more sustainable and energy-efficient computing paradigm for applications operating in stochastic environments with unpredictable workloads, it is essential to minimize MEC server usage and exploit the available resource-constrained edge devices, keeping the resourceful servers idle to handle any unpredictably large workload. In this paper, we propose DEWOrch, a deep reinforcement learning algorithm for efficient workload orchestration. DEWOrch aims to increase the utilization of resource-constrained edge devices and minimize resource waste, yielding a more sustainable and energy-efficient computing solution. The model is evaluated in an Extreme Edge Computing environment, where no MEC servers are available and only edge devices with constrained capacity perform tasks. The results show that DEWOrch outperforms state-of-the-art methods with a roughly 50% decrease in resource waste, while improving task success rate and decreasing energy consumption per task in most scenarios.
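To illustrate the multi-objective goal described in the abstract, the minimal Python sketch below shows how task success, resource waste, and energy consumption per task could be folded into a single scalar reward for a DRL orchestration agent. The weights, the idle_capacity_ratio proxy for resource waste, and the energy budget are illustrative assumptions only; they are not the reward formulation actually used by DEWOrch.

import numpy as np

# Illustrative weights for a scalarized multi-objective reward (assumed values,
# not taken from the paper).
W_SUCCESS, W_WASTE, W_ENERGY = 1.0, 0.5, 0.3

def orchestration_reward(task_succeeded: bool,
                         idle_capacity_ratio: float,
                         energy_per_task_j: float,
                         energy_budget_j: float = 10.0) -> float:
    """Combine task success, resource waste, and energy use into one scalar reward.

    idle_capacity_ratio: fraction of the chosen edge device's capacity left unused
                         after task placement (a simple proxy for resource waste).
    energy_per_task_j:   energy consumed executing the task, in joules.
    """
    success_term = 1.0 if task_succeeded else -1.0
    waste_penalty = float(np.clip(idle_capacity_ratio, 0.0, 1.0))
    energy_penalty = min(energy_per_task_j / energy_budget_j, 1.0)
    return (W_SUCCESS * success_term
            - W_WASTE * waste_penalty
            - W_ENERGY * energy_penalty)

# Example: a successful placement that leaves 20% of the device idle and uses 4 J.
print(orchestration_reward(True, 0.2, 4.0))  # 1.0 - 0.1 - 0.12 = 0.78

A scalarized reward of this kind is one common way to trade off competing objectives in deep reinforcement learning; other multi-objective DRL formulations (e.g., Pareto-based methods) are also possible.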
Pages: 74558-74571
Page count: 14