Representation and Reinforcement Learning for Task Scheduling in Edge Computing

Cited by: 18
Authors
Tang, Zhiqing [1 ,2 ]
Jia, Weijia [2 ,3 ]
Zhou, Xiaojie [1 ]
Yang, Wenmian [1 ,2 ]
You, Yongjian [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
[2] Univ Macau, State Key Lab IoT Smart City, Macau 999078, Peoples R China
[3] BNU UIC Joint AI Res Inst Beijing Normal Univ, Zhuhai 519087, Guangdong, Peoples R China
Keywords
Edge computing; task scheduling; representation learning; reinforcement learning
DOI
10.1109/TBDATA.2020.2990558
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Recently, many deep reinforcement learning (DRL)-based task scheduling algorithms have been widely used in edge computing (EC) to reduce energy consumption. Unlike existing algorithms that consider a fixed and small number of edge nodes (servers) and tasks, this article proposes a representation model with a DRL-based algorithm that adapts to the dynamic changes of nodes and tasks and mitigates the curse of dimensionality that large-scale scheduling causes in DRL. Specifically, 1) we apply representation learning models to describe the different nodes and tasks in EC, i.e., nodes and tasks are mapped to corresponding vector sub-spaces to reduce the dimensions and store the vector space efficiently; 2) in the reduced-dimensional space, a DRL-based algorithm is employed to learn the vector representations of nodes and tasks and make scheduling decisions; 3) experiments are conducted on a real-world data set, and the results show that the proposed representation model with the DRL-based algorithm outperforms the baselines by 18.04 and 9.94 percent on average in terms of energy consumption and service level agreement violation (SLAV), respectively.
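As a rough illustration of the idea summarized above (not the authors' implementation), the sketch below embeds a variable number of edge nodes and tasks into a fixed low-dimensional vector space and lets a small policy network score every (task, node) pair as a scheduling action. All names, feature sizes, and the embedding dimension (NodeTaskScheduler, NODE_FEATURES, TASK_FEATURES, EMBED_DIM) are assumptions made for this example.

```python
# Minimal sketch, assuming PyTorch and illustrative feature sizes; not the paper's code.
import torch
import torch.nn as nn

EMBED_DIM = 16       # assumed size of the reduced vector sub-space
NODE_FEATURES = 8    # assumed raw per-node features (CPU, memory, energy, ...)
TASK_FEATURES = 6    # assumed raw per-task features (workload, deadline, ...)

class NodeTaskScheduler(nn.Module):
    """Representation model + policy head: encodes nodes and tasks into embeddings,
    then scores every (task, node) pair to produce scheduling logits."""
    def __init__(self):
        super().__init__()
        self.node_encoder = nn.Sequential(nn.Linear(NODE_FEATURES, EMBED_DIM), nn.ReLU())
        self.task_encoder = nn.Sequential(nn.Linear(TASK_FEATURES, EMBED_DIM), nn.ReLU())
        self.policy_head = nn.Sequential(nn.Linear(2 * EMBED_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, node_feats, task_feats):
        # node_feats: (N, NODE_FEATURES), task_feats: (M, TASK_FEATURES)
        nodes = self.node_encoder(node_feats)   # (N, EMBED_DIM)
        tasks = self.task_encoder(task_feats)   # (M, EMBED_DIM)
        # Build all (task, node) pairs and score them -> logits of shape (M, N)
        pairs = torch.cat([
            tasks.unsqueeze(1).expand(-1, nodes.size(0), -1),
            nodes.unsqueeze(0).expand(tasks.size(0), -1, -1),
        ], dim=-1)
        return self.policy_head(pairs).squeeze(-1)

# Usage: sample a node assignment per task, e.g. inside a policy-gradient loop.
scheduler = NodeTaskScheduler()
logits = scheduler(torch.randn(5, NODE_FEATURES), torch.randn(3, TASK_FEATURES))
actions = torch.distributions.Categorical(logits=logits).sample()  # node index per task
```

Because the encoders map arbitrarily many nodes and tasks into the same fixed-size embedding space, the policy's input dimension stays constant as the system scale grows, which is the point of combining representation learning with DRL in this setting.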
Pages: 795 - 808
Page count: 14
Related Papers
50 records in total
  • [1] Deep Reinforcement Learning Based Task Scheduling in Edge Computing Networks
    Qi, Fan
    Li Zhuo
    Chen Xin
    [J]. 2020 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2020, : 835 - 840
  • [2] Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing
    Sheng, Shuran
    Chen, Peng
    Chen, Zhimin
    Wu, Lenan
    Yao, Yuxuan
    [J]. SENSORS, 2021, 21 (05) : 1 - 19
  • [3] Deep Reinforcement Learning based Task Scheduling Scheme in Mobile Edge Computing Network
    Zhao, Qi
    Feng, Mingjie
    Li, Li
    Li, Yi
    Liu, Hang
    Chen, Genshe
    [J]. SENSORS AND SYSTEMS FOR SPACE APPLICATIONS XIV, 2021, 11755
  • [4] Multiagent Meta-Reinforcement Learning for Optimized Task Scheduling in Heterogeneous Edge Computing Systems
    Niu, Liwen
    Chen, Xianfu
    Zhang, Ning
    Zhu, Yongdong
    Yin, Rui
    Wu, Celimuge
    Cao, Yangjie
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (12) : 10519 - 10531
  • [5] Task Scheduling Mechanism Based on Reinforcement Learning in Cloud Computing
    Wang, Yugui
    Dong, Shizhong
    Fan, Weibei
    [J]. MATHEMATICS, 2023, 11 (15)
  • [6] Neural Task Scheduling with Reinforcement Learning for Fog Computing Systems
    Bian, Simeng
    Huang, Xi
    Shao, Ziyu
    Yang, Yang
    [J]. 2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [7] Deep Reinforcement Learning based Energy Scheduling for Edge Computing
    Yang, Qinglin
    Li, Peng
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020, : 175 - 180
  • [8] Delay-sensitive Task Scheduling with Deep Reinforcement Learning in Mobile-edge Computing Systems
    Meng, Hao
    Chao, Daichong
    Guo, Qianying
    Li, Xiaowei
    [J]. 2019 3RD INTERNATIONAL CONFERENCE ON MACHINE VISION AND INFORMATION TECHNOLOGY (CMVIT 2019), 2019, 1229
  • [9] Multi-task scheduling in vehicular edge computing: a multi-agent reinforcement learning approach
    Zhao, Yiming
    Mo, Lei
    Liu, Ji
    [J]. CCF TRANSACTIONS ON PERVASIVE COMPUTING AND INTERACTION, 2024,
  • [10] Task Migration Based on Reinforcement Learning in Vehicular Edge Computing
    Moon, Sungwon
    Park, Jaesung
    Lim, Yujin
    [J]. WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2021, 2021