Model-Free Control for Distributed Stream Data Processing using Deep Reinforcement Learning

Cited by: 52
Authors
Li, Teng [1 ]
Xu, Zhiyuan [1 ]
Tang, Jian [1 ]
Wang, Yanzhi [1 ]
Affiliations
[1] Syracuse Univ, Dept Elect Engn & Comp Sci, Syracuse, NY 13244 USA
Source
PROCEEDINGS OF THE VLDB ENDOWMENT | 2018, Vol. 11, No. 6
Keywords
SYSTEMS;
DOI
10.14778/3184470.3184474
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification
0812 ;
Abstract
In this paper, we focus on general-purpose Distributed Stream Data Processing Systems (DSDPSs), which handle distributed processing of unbounded streams of continuous data at scale in real or near-real time. A fundamental problem in a DSDPS is scheduling (i.e., assigning workload to workers/machines) with the objective of minimizing average end-to-end tuple processing time. A widely used solution is to distribute workload evenly over the machines in the cluster in a round-robin manner, which is inefficient because it ignores communication delay. Model-based approaches (such as queueing theory) do not work well either, due to the high complexity of the system environment. We aim to develop a novel model-free approach that learns to control a DSDPS well from its own experience rather than from accurate, mathematically solvable system models, just as a human learns a skill (such as cooking, driving, or swimming). Specifically, we propose, for the first time, to leverage emerging Deep Reinforcement Learning (DRL) to enable model-free control in DSDPSs, and we present the design, implementation, and evaluation of a novel and highly effective DRL-based control framework that minimizes average end-to-end tuple processing time by jointly learning the system environment from very limited runtime statistics and making decisions under the guidance of powerful Deep Neural Networks (DNNs). To validate and evaluate the proposed framework, we implemented it on a widely used DSDPS, Apache Storm, and tested it with three representative applications: continuous queries, log stream processing, and word count (stream version). Extensive experimental results show that: 1) compared to Storm's default scheduler and a state-of-the-art model-based method, the proposed framework reduces average end-to-end tuple processing time by 33.5% and 14.0%, respectively, on average; and 2) the proposed framework quickly reaches a good scheduling solution during online learning, which justifies its practicability for online control in DSDPSs.
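The abstract's core idea, learning a scheduling policy from observed tuple latencies instead of from a queueing model, can be illustrated with a toy reinforcement-learning sketch. The simulator, the feature set, and the linear Q-function below are illustrative assumptions chosen for brevity (the paper itself trains deep neural networks inside a DRL framework on Apache Storm); only the model-free principle carries over: observe runtime statistics, pick an assignment, and update from the measured processing time.

```python
import random
import numpy as np

random.seed(0)
np.random.seed(0)

N_MACHINES = 3

def simulate_latency(loads, machine):
    """Toy stand-in for a DSDPS: end-to-end tuple processing time grows
    with the chosen machine's load, plus a penalty standing in for
    cross-machine communication delay when the lightest machine is missed."""
    comm = 0.5 if machine != int(np.argmin(loads)) else 0.0
    return float(loads[machine] + comm + np.random.rand() * 0.1)

def features(loads, action):
    # per-action features: the chosen machine's load and a bias term
    return np.array([loads[action], 1.0])

w = np.zeros(2)  # weights of a linear Q-function (the paper uses DNNs)

def q_value(loads, action):
    return float(w @ features(loads, action))

alpha, eps = 0.05, 0.2  # learning rate, exploration rate
for _ in range(5000):
    loads = np.random.rand(N_MACHINES)          # observed runtime statistics
    if random.random() < eps:                   # epsilon-greedy exploration
        action = random.randrange(N_MACHINES)
    else:
        action = max(range(N_MACHINES), key=lambda m: q_value(loads, m))
    reward = -simulate_latency(loads, action)   # minimize processing time
    # one-step TD update (a contextual-bandit form of Q-learning)
    w += alpha * (reward - q_value(loads, action)) * features(loads, action)

# the learned greedy policy should route work to lightly loaded machines
test_loads = np.array([0.9, 0.1, 0.8])
best = max(range(N_MACHINES), key=lambda m: q_value(test_loads, m))
print("preferred machine:", best)
```

Because nothing here models the environment explicitly, the policy adapts to whatever latency behavior it observes, which is the property that lets the paper's framework sidestep intractable queueing models.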
Pages: 705-718 (14 pages)
Related Papers (50 in total)
  • [1] On Distributed Model-Free Reinforcement Learning Control With Stability Guarantee
    Mukherjee, Sayak
    Vu, Thanh Long
    [J]. IEEE CONTROL SYSTEMS LETTERS, 2021, 5 (05): : 1615 - 1620
  • [2] On Distributed Model-Free Reinforcement Learning Control with Stability Guarantee
    Mukherjee, Sayak
    Vu, Thanh Long
    [J]. 2021 AMERICAN CONTROL CONFERENCE (ACC), 2021, : 2175 - 2180
  • [3] Control of a Wave Energy Converter Using Model-free Deep Reinforcement Learning
    Chen, Kemeng
    Huang, Xuanrui
    Lin, Zechuan
    Xiao, Xi
    [J]. 2024 UKACC 14TH INTERNATIONAL CONFERENCE ON CONTROL (CONTROL), 2024, : 1 - 6
  • [4] DATA-DRIVEN MODEL-FREE ITERATIVE LEARNING CONTROL USING REINFORCEMENT LEARNING
    Song, Bing
    Phan, Minh Q.
    Longman, Richard W.
    [J]. ASTRODYNAMICS 2018, PTS I-IV, 2019, 167 : 2579 - 2597
  • [5] Model-free Data-driven Predictive Control Using Reinforcement Learning
    Sawant, Shambhuraj
    Reinhardt, Dirk
    Kordabad, Arash Bahari
    Gros, Sebastien
    [J]. 2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 4046 - 4052
  • [6] Model-Free Decentralized Reinforcement Learning Control of Distributed Energy Resources
    Mukherjee, Sayak
    Bai, He
    Chakrabortty, Aranya
    [J]. 2020 IEEE POWER & ENERGY SOCIETY GENERAL MEETING (PESGM), 2020,
  • [7] DeepPool: Distributed Model-Free Algorithm for Ride-Sharing Using Deep Reinforcement Learning
    Al-Abbasi, Abubakr O.
    Ghosh, Arnob
    Aggarwal, Vaneet
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2019, 20 (12) : 4714 - 4727
  • [8] Control of neural systems at multiple scales using model-free, deep reinforcement learning
    Mitchell, B. A.
    Petzold, L. R.
    [J]. SCIENTIFIC REPORTS, 2018, 8
  • [9] Model-free learning control of neutralization processes using reinforcement learning
    Syafiie, S.
    Tadeo, F.
    Martinez, E.
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2007, 20 (06) : 767 - 782