Cloud Resource Scheduling With Deep Reinforcement Learning and Imitation Learning

Cited by: 53
Authors
Guo, Wenxia [1 ]
Tian, Wenhong [1 ]
Ye, Yufei [1 ]
Xu, Lingxiao [1 ]
Wu, Kui [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu 610054, Peoples R China
[2] Univ Victoria, Dept Comp Sci, Victoria, BC V8P 5C2, Canada
Source
IEEE INTERNET OF THINGS JOURNAL | 2021, Vol. 8, No. 5
Funding
National Natural Science Foundation of China;
Keywords
Resource management; Cloud computing; Machine learning; Task analysis; Dynamic scheduling; Processor scheduling; Learning (artificial intelligence); Cloud resource scheduling; deep reinforcement learning (deep RL); imitation learning; DYNAMIC CONSOLIDATION; VIRTUAL MACHINES; MANAGEMENT; ENERGY; ALGORITHM; GAME; GO;
DOI
10.1109/JIOT.2020.3025015
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Cloud resource management belongs to the category of combinatorial optimization problems, most of which have been proven to be NP-hard. In recent years, reinforcement learning (RL), a special paradigm of machine learning, has been used to tackle these NP-hard problems. In this article, we present a deep RL-based solution, called DeepRM_Plus, to efficiently solve different cloud resource management problems. We use a convolutional neural network to capture the resource management model and utilize imitation learning in the reinforcement process to reduce the training time of the optimal policy. Compared with the state-of-the-art algorithm DeepRM, DeepRM_Plus converges 37.5% faster. Moreover, DeepRM_Plus reduces the average weighted turnaround time and the average cycling time by 51.85% and 11.51%, respectively.
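The abstract describes a convolutional policy network trained with deep RL and warm-started by imitation learning. Below is a minimal, hypothetical PyTorch sketch of that general idea: a small CNN maps a cluster/job occupancy "image" to scheduling actions, behavior cloning on a heuristic expert pretrains the policy, and a REINFORCE-style update then refines it. The class and function names, tensor shapes, the heuristic expert, and the training loop are illustrative assumptions, not the authors' DeepRM_Plus implementation.

```python
# Hypothetical sketch of a CNN scheduling policy with imitation-learning warm start
# followed by a policy-gradient refinement step (assumed shapes and names).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical


class ConvPolicy(nn.Module):
    """Maps a 2-D resource/job occupancy image to a distribution over actions
    (which waiting job to schedule next, or a no-op)."""

    def __init__(self, in_channels=1, height=20, width=20, num_actions=11):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * height * width, num_actions)

    def forward(self, state):                    # state: (B, C, H, W)
        x = F.relu(self.conv1(state))
        x = F.relu(self.conv2(x))
        return self.fc(x.flatten(start_dim=1))   # action logits


def imitation_pretrain(policy, states, expert_actions, epochs=5, lr=1e-3):
    """Behavior cloning: fit the policy to actions chosen by a heuristic expert
    (e.g., shortest-job-first), which is what shortens RL convergence."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), expert_actions)
        loss.backward()
        opt.step()


def reinforce_update(policy, states, actions, returns, lr=1e-4):
    """One REINFORCE-style policy-gradient step on a collected trajectory."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    dist = Categorical(logits=policy(states))
    loss = -(dist.log_prob(actions) * returns).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()


if __name__ == "__main__":
    policy = ConvPolicy()
    # Fake data standing in for (cluster-image state, expert action) pairs.
    states = torch.rand(64, 1, 20, 20)
    expert_actions = torch.randint(0, 11, (64,))
    imitation_pretrain(policy, states, expert_actions)
    # Fake trajectory for the RL refinement step.
    returns = torch.randn(64)
    reinforce_update(policy, states, expert_actions, returns)
```

The behavior-cloning warm start stands in for the paper's imitation-learning component, consistent with the abstract's claim that imitation learning reduces the training time of the optimal policy.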
Pages: 3576-3586
Page count: 11
Related Papers
50 records in total
  • [11] Optimized intellectual resource scheduling using deep reinforcement Q-learning in cloud computing
    Uma, J.
    Vivekanandan, P.
    Shankar, S.
    TRANSACTIONS ON EMERGING TELECOMMUNICATIONS TECHNOLOGIES, 2022, 33 (05):
  • [12] Deep reinforcement learning-based algorithms selectors for the resource scheduling in hierarchical Cloud computing
    Zhou, G.
    Wen, R.
    Tian, W.
    Buyya, R.
    JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2022, 208
  • [13] Learning for a Robot: Deep Reinforcement Learning, Imitation Learning, Transfer Learning
    Hua, Jiang
    Zeng, Liangcai
    Li, Gongfa
    Ju, Zhaojie
    SENSORS, 2021, 21 (04) : 1 - 21
  • [14] Learning for a robot: Deep reinforcement learning, imitation learning, transfer learning
    Hua, Jiang
    Zeng, Liangcai
    Li, Gongfa
    Ju, Zhaojie
    Sensors (Switzerland), 2021, 21 (04): 1 - 21
  • [15] Workflow scheduling based on deep reinforcement learning in the cloud environment
    Dong, Tingting
    Xue, Fei
    Xiao, Chuangbai
    Zhang, Jiangjiang
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2021, 12: 10823 - 10835
  • [16] Deep Reinforcement Learning for Dynamic Workflow Scheduling in Cloud Environment
    Dong, Tingting
    Xue, Fei
    Xiao, Changbai
    Zhang, Jiangjiang
    2021 IEEE INTERNATIONAL CONFERENCE ON SERVICES COMPUTING (SCC 2021), 2021, : 107 - 115
  • [17] Workflow scheduling based on deep reinforcement learning in the cloud environment
    Dong, Tingting
    Xue, Fei
    Xiao, Chuangbai
    Zhang, Jiangjiang
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2021, 12 (12) : 10823 - 10835
  • [18] Utilizing Deep Reinforcement Learning for Resource Scheduling in Virtualized Clouds
    Nashaat, Mona
    Nashaat, Heba
    ENGINEERING SOLUTIONS TOWARD SUSTAINABLE DEVELOPMENT, ESSD 2023, 2024, : 471 - 484
  • [19] Radio Resource Scheduling with Deep Pointer Networks and Reinforcement Learning
    AL-Tam, F.
    Mazayev, A.
    Correia, N.
    Rodriguez, J.
    2020 IEEE 25TH INTERNATIONAL WORKSHOP ON COMPUTER AIDED MODELING AND DESIGN OF COMMUNICATION LINKS AND NETWORKS (CAMAD), 2020,
  • [20] ReCARL: Resource Allocation in Cloud RANs With Deep Reinforcement Learning
    Xu, Zhiyuan
    Tang, Jian
    Yin, Chengxiang
    Wang, Yanzhi
    Xue, Guoliang
    Wang, Jing
    Gursoy, M. Cenk
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2022, 21 (07) : 2533 - 2545