Multiagent Meta-Reinforcement Learning for Optimized Task Scheduling in Heterogeneous Edge Computing Systems

Citations: 4
Authors:
Niu, Liwen [1 ]
Chen, Xianfu [2 ]
Zhang, Ning [3 ]
Zhu, Yongdong [4 ]
Yin, Rui [5 ]
Wu, Celimuge [6 ,7 ]
Cao, Yangjie [1 ]
Affiliations:
[1] Zhengzhou Univ, Sch Cyber Sci & Engn, Zhengzhou 450001, Peoples R China
[2] VTT Tech Res Ctr Finland, Oulu 90570, Finland
[3] Univ Windsor, Dept Elect & Comp Engn, Windsor, ON N9B 3P4, Canada
[4] Zhejiang Lab, Intelligent Network Res, Hangzhou 311121, Peoples R China
[5] Zhejiang Univ City Coll, Informat Sci & Elect Engn, Hangzhou 310015, Peoples R China
[6] Univ Electro Commun, Grad Sch Informat & Engn, Tokyo 1828585, Japan
[7] Univ Electro Commun, Meta Networking Res Ctr, Tokyo 1828585, Japan
Funding:
National Natural Science Foundation of China
Keywords:
Wireless fidelity; Task analysis; Processor scheduling; Edge computing; Servers; Scheduling; Training; Computation task scheduling; heterogeneous edge computing systems; Markov decision process (MDP); meta-learning; multiagent proximal policy optimization (PPO); resource allocation
DOI
10.1109/JIOT.2023.3241222
Chinese Library Classification (CLC):
TP [Automation Technology, Computer Technology]
Discipline Classification Code:
0812
Abstract:
Mobile-edge computing (MEC) brings the potential to address the ever-increasing computation demands of mobile users (MUs). In addition to local processing, the resource-constrained MUs in an MEC system can offload computation to nearby servers for remote execution. With the explosive growth of mobile devices, computation offloading faces the challenge of spectrum congestion, which in turn deteriorates the overall quality of the computation experience. This article therefore investigates computation task scheduling in a heterogeneous cellular and WiFi MEC system, which provides both licensed and unlicensed spectrum opportunities. Due to the sharing of communication and computation resources as well as the underlying uncertainties, we formulate the problem of computation task scheduling among the competing MUs in a stationary heterogeneous edge computing system as a noncooperative stochastic game. We propose an approximation-based multiagent Markov decision process that does not require global system state observations, under which a multiagent proximal policy optimization (PPO) algorithm is derived to solve for the corresponding Nash equilibrium. When extended to a nonstationary heterogeneous edge computing system, the obtained algorithm suffers from slow convergence due to its limited adaptability. Accordingly, we explore meta-learning and propose a multiagent meta-PPO algorithm, which rapidly adapts control-policy learning to the nonstationarity. Numerical experiments demonstrate the performance gains of our proposed algorithms.
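The abstract describes a multiagent PPO scheduler wrapped in a meta-learning loop to cope with nonstationarity. Below is a minimal, illustrative sketch of that general idea only, assuming a first-order MAML-style meta-update around tabular-softmax PPO steps in a toy one-step offloading model; every name, the environment, and the first-order meta-gradient are hypothetical simplifications and not the authors' algorithm.

# Minimal, illustrative sketch of a first-order MAML-style meta-update wrapped
# around PPO clipped-surrogate gradient steps, for a toy task-scheduling problem.
# All names and the environment model are hypothetical; this is NOT the paper's
# meta-PPO, only a generic outline of the technique on a discrete-action toy.
import numpy as np

N_STATES, N_ACTIONS = 4, 3      # toy: channel/queue state x {local, cellular, WiFi}
CLIP_EPS, INNER_LR, META_LR = 0.2, 0.05, 0.02

def softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sample_batch(theta, congestion, rng, batch=256):
    """Roll out one-step scheduling decisions in a toy MEC model: reward is the
    negative latency, and offloading latency grows with a task-specific congestion level."""
    states = rng.integers(N_STATES, size=batch)
    probs = softmax(theta[states])                       # (batch, N_ACTIONS)
    actions = np.array([rng.choice(N_ACTIONS, p=p) for p in probs])
    base_latency = np.array([1.0, 0.4, 0.3])             # local, cellular, WiFi
    latency = base_latency[actions] + congestion * (actions > 0) + 0.1 * states
    return states, actions, -latency, probs[np.arange(batch), actions]

def ppo_gradient(theta, states, actions, advantages, old_probs):
    """Gradient of the PPO clipped surrogate for a tabular softmax policy."""
    grad = np.zeros_like(theta)
    probs = softmax(theta[states])
    new_p = probs[np.arange(len(actions)), actions]
    ratio = new_p / old_probs
    clipped = np.clip(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS)
    use_unclipped = (ratio * advantages) <= (clipped * advantages)  # min() picks this term
    for i, (s, a) in enumerate(zip(states, actions)):
        if use_unclipped[i]:                              # clipped branch has zero gradient
            dlogpi = -probs[i]
            dlogpi[a] += 1.0
            grad[s] += ratio[i] * advantages[i] * dlogpi
    return grad / len(actions)

def inner_adapt(theta, congestion, rng, steps=3):
    """Task-specific adaptation: a few PPO steps on data from one environment."""
    theta = theta.copy()
    for _ in range(steps):
        s, a, r, old_p = sample_batch(theta, congestion, rng)
        theta += INNER_LR * ppo_gradient(theta, s, a, r - r.mean(), old_p)
    return theta

rng = np.random.default_rng(0)
meta_theta = np.zeros((N_STATES, N_ACTIONS))
for _ in range(50):                                       # meta-training over sampled tasks
    meta_grad = np.zeros_like(meta_theta)
    for congestion in rng.uniform(0.0, 1.0, size=4):      # each task = a congestion level
        adapted = inner_adapt(meta_theta, congestion, rng)
        s, a, r, old_p = sample_batch(adapted, congestion, rng)
        meta_grad += ppo_gradient(adapted, s, a, r - r.mean(), old_p)  # first-order meta-gradient
    meta_theta += META_LR * meta_grad / 4

# Quick check: evaluate the meta-initialization after a few adaptation steps on a new task.
s, a, r, _ = sample_batch(inner_adapt(meta_theta, 0.5, rng), 0.5, rng)
print("average reward after adaptation:", r.mean())

The first-order outer update sidesteps second-order derivatives, a common simplification of MAML. In the paper's multiagent setting, each MU would presumably run its own such learner on local observations rather than global state; the sketch collapses this to a single agent for brevity.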
Pages: 10519-10531
Page count: 13