REACT: Reinforcement learning and multi-objective optimization for task scheduling in ultra-dense edge networks

Cited: 0
Authors
Smithamol, M. B. [1]
Sridhar, Rajeswari [2]
Affiliations
[1] LBS Inst Technol Women Trivandrum, Thiruvananthapuram, Kerala, India
[2] NIT Trichy, Trichy 620015, Tamil Nadu, India
Keywords
Edge computing; Computation offloading; Resource allocation; Latency optimization; Sensitivity analysis
DOI
10.1016/j.adhoc.2025.103834
CLC classification
TP [Automation and computer technology]
Subject classification
0812
Abstract
摘要
This paper addresses task scheduling and resource allocation in ultra-dense edge cloud (UDEC) networks, which integrate micro and macro base stations with diverse user equipment in 5G environments. To optimize system performance, we propose REACT, a novel two-level scheduling framework that leverages reinforcement learning (RL) for energy-efficient task scheduling. At the upper level, RL-based adaptive optimization replaces conventional power allocation techniques, dynamically minimizing transmission energy consumption under the Non-Orthogonal Multiple Access (NOMA) protocol. At the lower level, the joint task offloading and resource allocation problem is modeled as a multi-objective optimization problem and solved with a hybrid approach that combines meta-heuristic algorithms with Long Short-Term Memory (LSTM) predictive models, maximizing response rate and system throughput. Sensitivity analyses explore the effects of user density, channel quality, workload, and request size on performance. Comparative evaluations against state-of-the-art methods demonstrate the framework's superior efficiency on dynamic scheduling problems, reducing energy consumption and enhancing user experience.
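The abstract does not include the paper's algorithms, but the two-level structure it describes can be sketched in miniature. The following toy Python example is an illustrative stand-in only: the discretized power levels, the toy rate model, the reward shape, and the greedy lower-level assignment are all hypothetical simplifications of the paper's RL power allocation and meta-heuristic/LSTM offloading stage, not the authors' method.

```python
import random

# Hypothetical transmit-power choices (W); the paper's action space differs.
POWER_LEVELS = [0.1, 0.2, 0.4, 0.8]

def upper_level_q_learning(channel_gains, episodes=200, alpha=0.5, eps=0.1):
    """Upper level (sketch): tabular Q-learning picks a power level per
    coarse channel-quality bin, rewarding low transmission energy while
    still meeting a toy rate target."""
    q = {}                      # (state, action) -> value
    rng = random.Random(0)      # seeded for reproducibility
    for _ in range(episodes):
        for g in channel_gains:
            state = round(g, 1)  # coarse channel-quality bin
            if rng.random() < eps:
                a = rng.randrange(len(POWER_LEVELS))
            else:
                a = max(range(len(POWER_LEVELS)),
                        key=lambda i: q.get((state, i), 0.0))
            p = POWER_LEVELS[a]
            rate = g * p                                   # toy rate model
            reward = (1.0 if rate >= 0.05 else -1.0) - p   # meet rate, save energy
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + alpha * (reward - old)
    return {round(g, 1): POWER_LEVELS[max(range(len(POWER_LEVELS)),
                                          key=lambda i: q.get((round(g, 1), i), 0.0))]
            for g in channel_gains}

def lower_level_offload(tasks, servers, policy, gains):
    """Lower level (sketch): greedy stand-in for the meta-heuristic search,
    assigning each task to the server minimizing a weighted sum of estimated
    latency (load / capacity) and transmission energy."""
    assignment, load = {}, {s: 0.0 for s in servers}
    for t, (work, g) in enumerate(zip(tasks, gains)):
        p = policy[round(g, 1)]
        best = min(servers,
                   key=lambda s: (work + load[s]) / servers[s] + p * work)
        assignment[t] = best
        load[best] += work
    return assignment

if __name__ == "__main__":
    policy = upper_level_q_learning([0.1, 0.5, 0.9])
    servers = {"macro_bs": 2.0, "micro_bs": 1.0}   # hypothetical capacities
    plan = lower_level_offload([1.0, 2.0, 0.5], servers, policy, [0.1, 0.5, 0.9])
    print(policy, plan)
```

The greedy assignment is only a placeholder for the paper's meta-heuristic, and the LSTM workload predictor is omitted entirely; the point is the control flow, where the learned power policy feeds into the per-task offloading decision.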
Pages: 15
Related papers
50 items in total
  • [41] A reinforcement learning approach for dynamic multi-objective optimization
    Zou, Fei
    Yen, Gary G.
    Tang, Lixin
    Wang, Chunfeng
    INFORMATION SCIENCES, 2021, 546 : 815 - 834
  • [42] Multi-Objective Optimization in Disaster Backup with Reinforcement Learning
    Yi, Shanwen
    Qin, Yao
    Wang, Hua
    MATHEMATICS, 2025, 13 (03)
  • [43] Joint Optimization of Energy Efficiency and User Outage Using Multi-Agent Reinforcement Learning in Ultra-Dense Small Cell Networks
    Kim, Eunjin
    Jung, Bang Chul
    Park, Chan Yi
    Lee, Howon
    ELECTRONICS, 2022, 11 (04)
  • [44] Reinforcement Learning-Based Optimization for Drone Mobility in 5G and Beyond Ultra-Dense Networks
    Tanveer, Jawad
    Haider, Amir
    Ali, Rashid
    Kim, Ajung
    CMC-COMPUTERS MATERIALS & CONTINUA, 2021, 68 (03): : 3807 - 3823
  • [45] Multi-objective safe reinforcement learning: the relationship between multi-objective reinforcement learning and safe reinforcement learning
    Horie, Naoto
    Matsui, Tohgoroh
    Moriyama, Koichi
    Mutoh, Atsuko
    Inuzuka, Nobuhiro
    ARTIFICIAL LIFE AND ROBOTICS, 2019, 24 (03) : 352 - 359
  • [47] A Resource and Task Scheduling Based Multi-Objective Optimization Model and Algorithms in Elastic Optical Networks
    Wang, Yuping
    Yang, Qingdong
    Guo, Xiaofang
    SENSORS, 2022, 22 (24)
  • [48] Reinforcement Learning based Adaptive Handover in Ultra-Dense Cellular Networks with Small Cells
    Liu, Qianyu
    Kwong, Chiew Foong
    Sun, Wei
    Li, Lincan
    Zhao, Haoyu
    INTERNATIONAL SYMPOSIUM ON ARTIFICIAL INTELLIGENCE AND ROBOTICS 2020, 2020, 11574
  • [49] Joint Optimization of Microservice Deployment and Routing in Edge via Multi-Objective Deep Reinforcement Learning
    Hu, Menglan
    Wang, Hao
    Xu, Xiaohui
    He, Jianwen
    Hu, Yi
    Deng, Tianping
    Peng, Kai
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (06): : 6364 - 6381
  • [50] Edge offloading strategy for the multi-base station game in ultra-dense networks
    Wang R.
    Wu H.
    Cui Y.
    Wu D.
    Zhang H.
    Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2021, 48 (04): : 1 - 10