Virtual Machine Placement Via Q-Learning with Function Approximation

Cited by: 5
Authors
Duong, Thai [1 ]
Chu, Yu-Jung [1 ]
Thinh Nguyen [1 ]
Chakareski, Jacob [2 ]
Affiliations
[1] Oregon State Univ, Sch Elect Engn & Comp Sci, Corvallis, OR 97331 USA
[2] Univ Alabama, Dept Elect & Comp Engn, Tuscaloosa, AL 35487 USA
Keywords
DOI
10.1109/GLOCOM.2015.7417491
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic and communication technology];
Discipline codes
0808; 0809;
Abstract
While existing virtual machine technologies provide easy-to-use platforms for distributed computing applications, many are far from efficient and are not designed to accommodate diverse objectives, which dramatically penalizes their performance. These shortcomings arise from 1) the lack of a formal optimization framework that readily leads to algorithmic solutions for diverse objectives; 2) not incorporating knowledge of the underlying network topologies and the communication/interaction patterns among the virtual machines/services; and 3) not considering the time-varying aspects of real-world environments. This paper formalizes an optimization framework and develops corresponding algorithmic solutions, based on Markov Decision Processes and Q-learning, for virtual machine/service placement and migration for distributed computing in time-varying environments. Importantly, the knowledge of the underlying topology of the computing infrastructure, the interaction patterns between the virtual machines, and the dynamics of the supported applications is formally characterized and incorporated into the proposed algorithms to improve performance. Simulation results for small-scale and large-scale networks are provided to verify our solution approach.
Pages: 6
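This record contains no code, but the approach outlined in the abstract (model placement and migration as a Markov Decision Process and learn a policy with Q-learning plus function approximation) can be illustrated with a small sketch. The Python example below is not the authors' algorithm: the topology, traffic process, one-hot feature map, and every constant (N_HOSTS, N_VMS, alpha, gamma, epsilon) are assumptions chosen only to show the generic semi-gradient Q-learning update with a linear approximator on a toy placement problem.

```python
import numpy as np

# Minimal illustrative sketch, NOT the algorithm from the paper above: it only
# shows Q-learning with linear function approximation on a toy VM placement
# problem. The cost model, traffic process, feature map, and all constants
# below are assumptions made for illustration.

rng = np.random.default_rng(0)

N_HOSTS = 4                      # candidate physical hosts (assumed)
N_VMS = 3                        # virtual machines to place (assumed)
N_ACTIONS = N_VMS * N_HOSTS      # an action migrates one VM to one host

# Assumed infrastructure topology: symmetric communication cost between hosts.
host_dist = rng.uniform(1.0, 5.0, size=(N_HOSTS, N_HOSTS))
host_dist = (host_dist + host_dist.T) / 2.0
np.fill_diagonal(host_dist, 0.0)

def traffic(t):
    """Assumed slowly time-varying VM-to-VM traffic intensities."""
    i = np.arange(N_VMS)
    return np.abs(np.sin(0.01 * t + i[:, None] + i[None, :]))

def cost(placement, t):
    """Total communication cost of a placement at time t (assumed model)."""
    tr = traffic(t)
    return sum(tr[i, j] * host_dist[placement[i], placement[j]]
               for i in range(N_VMS) for j in range(N_VMS))

def features(placement, action):
    """One-hot features of the placement reached by taking 'action'."""
    vm, host = divmod(int(action), N_HOSTS)
    nxt = placement.copy()
    nxt[vm] = host
    phi = np.zeros(N_VMS * N_HOSTS)
    for i, h in enumerate(nxt):
        phi[i * N_HOSTS + h] = 1.0
    return phi, nxt

# Linear approximation Q(s, a) ~ w . phi(s, a), semi-gradient Q-learning update.
w = np.zeros(N_VMS * N_HOSTS)
alpha, gamma, epsilon = 0.05, 0.9, 0.1

placement = rng.integers(0, N_HOSTS, size=N_VMS)
for t in range(5000):
    # epsilon-greedy choice among all (vm, host) migrations
    q = np.array([w @ features(placement, a)[0] for a in range(N_ACTIONS)])
    a = rng.integers(N_ACTIONS) if rng.random() < epsilon else int(np.argmax(q))
    phi, nxt = features(placement, a)

    r = -cost(nxt, t)            # reward: negative communication cost
    q_next = max(w @ features(nxt, b)[0] for b in range(N_ACTIONS))
    w += alpha * (r + gamma * q_next - w @ phi) * phi
    placement = nxt

print("learned placement:", placement.tolist(),
      "cost:", round(cost(placement, 5000), 3))
```

In a realistic setting the feature map would encode the infrastructure topology and inter-VM traffic described in the abstract, rather than a bare one-hot placement vector.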
Related papers
50 records
  • [41] Dynamic single machine scheduling using Q-learning agent
    Kong, LF
    Wu, J
    Proceedings of 2005 International Conference on Machine Learning and Cybernetics, Vols 1-9, 2005 : 3237 - 3241
  • [42] Strategies of Market Game Behavior of Virtual Power Plants Based on Q-learning With Augmented Lagrange Function
    Liu T.
    Han D.
    Wang Y.
    Dong X.
    Dianwang Jishu/Power System Technology, 2021, 45 (10) : 4000 - 4008
  • [43] Q-learning as a model of utilitarianism in a human-machine team
    Krening, Samantha
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (23) : 16853 - 16864
  • [44] Multi-Agent Q-Learning with Joint State Value Approximation
    Chen Gang
    Cao Weihua
    Chen Xin
    Wu Min
    2011 30TH CHINESE CONTROL CONFERENCE (CCC), 2011, : 4878 - 4882
  • [45] Constrained Deep Q-Learning Gradually Approaching Ordinary Q-Learning
    Ohnishi, Shota
    Uchibe, Eiji
    Yamaguchi, Yotaro
    Nakanishi, Kosuke
    Yasui, Yuji
    Ishii, Shin
    FRONTIERS IN NEUROROBOTICS, 2019, 13
  • [46] Machine learning via multiresolution approximation
    Blayvas, I
    Kimmel, R
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2003, E86D (07) : 1172 - 1180
  • [47] Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics
    Weissenbacher, Matthias
    Sinha, Samarth
    Garg, Animesh
    Kawahara, Yoshinobu
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [48] Virtual machine migration policy for multi-tier application in cloud computing based on Q-learning algorithm
    Cong Hung Tran
    Thanh Khiet Bui
    Tran Vu Pham
    COMPUTING, 2022, 104 (06) : 1285 - 1306
  • [50] Learning rates for Q-Learning
    Even-Dar, E
    Mansour, Y
    COMPUTATIONAL LEARNING THEORY, PROCEEDINGS, 2001, 2111 : 589 - 604