Lever: Towards Low-Latency Batched Stream Processing by Pre-Scheduling

Cited: 5
Authors
Chen, Fei [1 ]
Wu, Song [1 ]
Jin, Hai [1 ]
Yao, Yin [1 ]
Liu, Zhiyi [1 ]
Gu, Lin [1 ]
Zhou, Yongluan [2 ]
Affiliations
[1] Huazhong Univ Sci & Technol, SCTS CGCL, Wuhan, Hubei, Peoples R China
[2] Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark
Keywords
stream processing; recurring jobs; straggler; scheduling
DOI
10.1145/3127479.3132687
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
With the vast involvement of streaming big data in many applications (e.g., stock market data, sensor data, and social network data), quickly mining and analyzing such data is becoming increasingly important. To provide fault tolerance and efficient stream processing at scale, recent stream processing frameworks have proposed to adapt batch processing systems, such as MapReduce and Spark, to handle streaming data by dividing the streams into micro-batches and treating the workload as a continuous series of small jobs [1]. The fundamental challenge of building a batched stream processing system is to minimize the processing latency of each micro-batch.

In this paper, we focus on the straggler problem, where a subset of workers lag behind and significantly prolong the job completion time. The straggler problem is a well-known critical problem in parallel processing systems. Compared to large batch processing, the straggler problem in micro-batch processing is more severe and harder to tackle. We argue that the problem with applying existing straggler mitigation solutions to micro-batch processing is that they detect (or predict) stragglers and re-schedule them too late in the data handling pipeline. The re-scheduling actions are carried out during task execution, so they inevitably increase the processing time of the micro-batches. Furthermore, since the data have already been dispatched, re-scheduling inherently incurs expensive data relocation. This overhead becomes significant in micro-batch processing because of the short processing time of each micro-batch. We refer to these methods as post-scheduling techniques.

To address the problem, we propose a new pre-scheduling framework, called Lever, which predicts stragglers and makes timely scheduling decisions to minimize processing latency. As shown in Figure 1, Lever periodically collects and analyzes the historical job profiles of the recurring micro-batch jobs. Based on this information, Lever pre-schedules the data in three main steps: identifying potential stragglers, evaluating node capacity, and choosing suitable helpers. More importantly, Lever makes its scheduling decisions before the batching module dispatches the data. Because the scheduling is done while the data are being batched, it does not increase the processing time of the micro-batch.

[Graphical abstract (Figure 1) omitted]

We implemented Lever in Spark Streaming and contributed it to the open source community as an extension of Apache Spark Streaming. To the best of our knowledge, this is the first work specifically addressing the straggler problem in continuous micro-batch processing. We conduct various experiments to validate the effectiveness of Lever. The experimental results demonstrate that Lever reduces job completion time by 30.72% to 42.19% and significantly outperforms traditional techniques.
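To make the three-step pre-scheduling idea concrete, below is a minimal, self-contained Scala sketch (Scala, since Spark Streaming is Scala-based). Everything in it is an illustrative assumption rather than Lever's actual algorithm or code: the NodeProfile record, the median-based straggler test, the spare-capacity proxy, and the 50% offload cap are all hypothetical choices made for the example.

```scala
// Hypothetical sketch of Lever-style pre-scheduling, NOT the authors' code.
// Assumes per-node profiles (average task runtime, current load) gathered
// from previous runs of the same recurring micro-batch job.

case class NodeProfile(node: String, avgTaskTimeMs: Double, load: Double)

object PreScheduler {

  // Step 1: identify potential stragglers, e.g. nodes whose historical
  // task runtime exceeds the cluster median by a chosen threshold.
  def findStragglers(profiles: Seq[NodeProfile],
                     threshold: Double = 1.5): Seq[NodeProfile] = {
    val times  = profiles.map(_.avgTaskTimeMs).sorted
    val median = times(times.size / 2)
    profiles.filter(_.avgTaskTimeMs > threshold * median)
  }

  // Step 2: evaluate node capacity; here a toy proxy combining idle
  // capacity (1 - load) with the node's historical speed.
  def spareCapacity(p: NodeProfile): Double =
    math.max(0.0, 1.0 - p.load) / p.avgTaskTimeMs

  // Step 3: for each straggler, choose the helper with the most spare
  // capacity and decide, before dispatch, what fraction of the
  // straggler's input to route to the helper instead.
  def preSchedule(profiles: Seq[NodeProfile]): Map[String, (String, Double)] = {
    val stragglers = findStragglers(profiles)
    val helpers    = profiles.diff(stragglers)
    stragglers.flatMap { s =>
      helpers.sortBy(h => -spareCapacity(h)).headOption.map { h =>
        // Offload proportionally to how much slower the straggler is,
        // capped at half of its input (an arbitrary illustrative cap).
        val fraction = 1.0 - h.avgTaskTimeMs / s.avgTaskTimeMs
        (s.node, (h.node, math.min(0.5, math.max(0.0, fraction))))
      }
    }.toMap
  }
}

object Demo {
  def main(args: Array[String]): Unit = {
    val plan = PreScheduler.preSchedule(Seq(
      NodeProfile("node-1", 120.0, 0.9), // historically slow, heavily loaded
      NodeProfile("node-2", 40.0, 0.3),
      NodeProfile("node-3", 45.0, 0.5)))
    println(plan) // Map(node-1 -> (node-2,0.5)): half of node-1's input goes to node-2
  }
}
```

In a deployment following the abstract's design, such a plan would be computed while the next micro-batch is still being formed, so the routing decision is ready before the batching module dispatches any data; this is what distinguishes pre-scheduling from the post-scheduling techniques criticized above.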
Pages: 643 - 643
Page count: 1
Related Papers
50 in total (10 shown)
  • [1] Towards Low-Latency Batched Stream Processing by Pre-Scheduling
    Jin, Hai
    Chen, Fei
    Wu, Song
    Yao, Yin
    Liu, Zhiyi
    Gu, Lin
    Zhou, Yongluan
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2019, 30 (03) : 710 - 722
  • [2] TurboStream: Towards Low-Latency Data Stream Processing
    Wu, Song
    Liu, Mi
    Ibrahim, Shadi
    Jin, Hai
    Gu, Lin
    Chen, Fei
    Liu, Zhiyi
    [J]. 2018 IEEE 38TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS), 2018, : 983 - 993
  • [3] Low-Latency Scheduling in MPTCP
    Hurtig, Per
    Grinnemo, Karl-Johan
    Brunstrom, Anna
    Ferlin, Simone
    Alay, Ozgu
    Kuhn, Nicolas
    [J]. IEEE-ACM TRANSACTIONS ON NETWORKING, 2019, 27 (01) : 302 - 315
  • [4] Pre-Scheduling Algorithm - Scheduling a Suitable Mix Prior to Processing
    Forbes, K.
    Goldsworthy, A. W.
    [J]. COMPUTER JOURNAL, 1977, 20 (01) : 27 - 29
  • [5] A Distributed and Scalable Framework for Low-Latency Continuous Trajectory Stream Processing
    Shaikh, Salman Ahmed
    Kitagawa, Hiroyuki
    Matono, Akiyoshi
    Kim, Kyoung-Sook
    [J]. IEEE ACCESS, 2024, 12 : 159426 - 159444
  • [6] Hazelcast Jet: Low-latency Stream Processing at the 99.99th Percentile
    Gencer, Can
    Topolnik, Marko
    Durina, Viliam
    Demirci, Emin
    Kahveci, Ensar B.
    Gurbuz, Ali
    Lukas, Ondrej
    Bartok, Jozsef
    Gierlach, Grzegorz
    Hartman, Frantisek
    Yilmaz, Ufuk
    Dogan, Mehmet
    Mandouh, Mohamed
    Fragkoulis, Marios
    Katsifodimos, Asterios
    [J]. PROCEEDINGS OF THE VLDB ENDOWMENT, 2021, 14 (12) : 3110 - 3121
  • [7] Viper: Communication-Layer Determinism and Scaling in Low-Latency Stream Processing
    Walulya, Ivan
    Nikolakopoulos, Yiannis
    Gulisano, Vincenzo
    Papatriantafilou, Marina
    Tsigas, Philippas
    [J]. EURO-PAR 2017: PARALLEL PROCESSING WORKSHOPS, 2018, 10659 : 129 - 140
  • [8] Viper: A module for communication-layer determinism and scaling in low-latency stream processing
    Walulya, Ivan
    Palyvos-Giannas, Dimitris
    Nikolakopoulos, Yiannis
    Gulisano, Vincenzo
    Papatriantafilou, Marina
    Tsigas, Philippas
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2018, 88 : 297 - 308
  • [9] Demo Abstract: Towards In-Network Processing for Low-Latency Industrial Control
    Rueth, Jan
    Glebke, Rene
    Ulmen, Tanja
    Wehrle, Klaus
    [J]. IEEE INFOCOM 2018 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2018
  • [10] Radar: Reducing Tail Latencies for Batched Stream Processing with Blank Scheduling
    Chen, Fei
    Wu, Song
    Jin, Hai
    Lin, Liwei
    Li, Rui
    [J]. IEEE 20TH INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING AND COMMUNICATIONS / IEEE 16TH INTERNATIONAL CONFERENCE ON SMART CITY / IEEE 4TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND SYSTEMS (HPCC/SMARTCITY/DSS), 2018, : 797 - 804