Mary, Hugo, and Hugo*: Learning to schedule distributed data-parallel processing jobs on shared clusters

Cited by: 3
Authors
Thamsen, Lauritz [1 ]
Beilharz, Jossekin [2 ]
Vinh Thuy Tran [1 ,3 ]
Nedelkoski, Sasho [1 ]
Kao, Odej [1 ]
Affiliations
[1] Tech Univ Berlin, Complex & Distributed IT Syst, Berlin, Germany
[2] Univ Potsdam, Hasso Plattner Inst, Operating Syst & Middleware Grp, Potsdam, Germany
[3] Thryve mHlth Pioneers GmbH, Berlin, Germany
Source
CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE
Keywords
cluster resource management; distributed data-parallel processing; job co-location; reinforcement learning; self-learning scheduler;
DOI
10.1002/cpe.5823
Chinese Library Classification
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Distributed data-parallel processing systems like MapReduce, Spark, and Flink are popular for analyzing large datasets using cluster resources. Resource management systems like YARN or Mesos in turn allow multiple data-parallel processing jobs to share cluster resources in temporary containers. Often, the containers do not strictly isolate resource usage, so that a high degree of overall resource utilization can be achieved despite overprovisioning and the often fluctuating utilization of individual jobs. However, some combinations of jobs utilize resources better and interfere less with each other than others when running on the same shared nodes. This article presents an approach for improving resource utilization and job throughput when scheduling recurring distributed data-parallel processing jobs in shared clusters. The approach is based on reinforcement learning and a measure of co-location goodness, so that cluster schedulers learn over time which jobs are best executed together on shared resources. We evaluated this approach over the last years with three prototype schedulers that build on each other: Mary, Hugo, and Hugo*. For the evaluation, we used exemplary Flink and Spark jobs from different application domains and clusters of commodity nodes managed by YARN. The results of these experiments show that our approach can increase resource utilization and job throughput significantly.
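To illustrate the general idea from the abstract, the following is a minimal, hypothetical sketch (not the authors' Mary/Hugo/Hugo* implementation) of a bandit-style scheduler that learns a co-location goodness score per pair of recurring job types and prefers co-locating well-matched jobs. The job names, reward signal, and the epsilon and learning-rate parameters are illustrative assumptions only.

# Hedged sketch, assuming a reward signal per co-located job pair is available
# (e.g., derived from measured utilization and slowdown); this is not the
# published implementation.
import random
from collections import defaultdict

class CoLocationScheduler:
    def __init__(self, epsilon=0.1, learning_rate=0.2):
        self.epsilon = epsilon              # exploration probability
        self.learning_rate = learning_rate  # step size for goodness updates
        # Running estimate of co-location goodness per (job_a, job_b) pair.
        self.goodness = defaultdict(float)

    def _key(self, job_a, job_b):
        # Order-independent key, since co-location goodness is symmetric.
        return tuple(sorted((job_a, job_b)))

    def select_partner(self, running_job, queued_jobs):
        """Pick which queued job to co-locate with an already running job."""
        if not queued_jobs:
            return None
        if random.random() < self.epsilon:
            return random.choice(queued_jobs)  # explore an untried pairing
        # Exploit: choose the partner with the highest learned goodness.
        return max(queued_jobs,
                   key=lambda j: self.goodness[self._key(running_job, j)])

    def update(self, job_a, job_b, reward):
        """Move the goodness estimate toward the observed reward."""
        key = self._key(job_a, job_b)
        old = self.goodness[key]
        self.goodness[key] = old + self.learning_rate * (reward - old)

# Usage sketch: learn from repeated executions of recurring jobs.
scheduler = CoLocationScheduler()
for _ in range(100):
    partner = scheduler.select_partner("spark-pagerank",
                                       ["flink-sgd", "spark-kmeans"])
    observed_reward = random.uniform(0, 1)  # placeholder for a measured signal
    scheduler.update("spark-pagerank", partner, observed_reward)

Because the scheduled jobs are recurring, such estimates can be refined across repeated executions, which is the setting the article targets.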
Pages: 12
Related papers
15 in total
  • [1] Hugo: A Cluster Scheduler that Efficiently Learns to Select Complementary Data-Parallel Jobs
    Thamsen, Lauritz
    Verbitskiy, Ilya
    Nedelkoski, Sasho
    Vinh Thuy Tran
    Meyer, Vinicius
    Xavier, Miguel G.
    Kao, Odej
    De Rose, Cesar A. F.
    EURO-PAR 2019: PARALLEL PROCESSING WORKSHOPS, 2020, 11997 : 519 - 530
  • [2] Blockchain Assisted Trust Management for Data-Parallel Distributed Learning
    Song, Yuxiao
    He, Daojing
    Dai, Minghui
    Chan, Sammy
    Choo, Kim-Kwang Raymond
    Guizani, Mohsen
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (05) : 3826 - 3843
  • [3] ReLoca: Optimize Resource Allocation for Data-parallel Jobs using Deep Learning
    Hu, Zhiyao
    Li, Dongsheng
    Zhang, Dongxiang
    Chen, Yixin
    IEEE INFOCOM 2020 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS, 2020, : 1163 - 1171
  • [4] Collaborative Cluster Configuration for Distributed Data-Parallel Processing: A Research Overview
    Thamsen, Lauritz
    Scheinert, Dominik
    Will, Jonathan
    Bader, Jonathan
    Kao, Odej
    DATENBANK-SPEKTRUM, 2022, 22 (02) : 143 - 151
  • [5] Efficient Data-Parallel Continual Learning with Asynchronous Distributed Rehearsal Buffers
    Bouvier, Thomas
    Nicolae, Bogdan
    Chaugier, Hugo
    Costan, Alexandru
    Foster, Ian
    Antoniu, Gabriel
    2024 IEEE 24TH INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND INTERNET COMPUTING, CCGRID 2024, 2024, : 245 - 254
  • [6] A characterization of soft-error sensitivity in data-parallel and model-parallel distributed deep learning
    Rojas, Elvis
    Perez, Diego
    Meneses, Esteban
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2024, 190
  • [7] Parallelizing Machine Learning Optimization Algorithms on Distributed Data-Parallel Platforms with Parameter Server
    Gu, Rong
    Fan, Shiqing
    Hu, Qiu
    Yuan, Chunfeng
    Huang, Yihua
    2018 IEEE 24TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS 2018), 2018, : 126 - 133
  • [8] AccDP: Accelerated Data-Parallel Distributed DNN Training for Modern GPU-Based HPC Clusters
    Alnaasan, Nawras
    Jain, Arpan
    Shafi, Aamir
    Subramoni, Hari
    Panda, Dhabaleswar K.
    2022 IEEE 29TH INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING, DATA, AND ANALYTICS, HIPC, 2022, : 32 - 41
  • [9] Compressed Collective Sparse-Sketch for Distributed Data-Parallel Training of Deep Learning Models
    Ge, Keshi
    Lu, Kai
    Fu, Yongquan
    Deng, Xiaoge
    Lai, Zhiquan
    Li, Dongsheng
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2023, 41 (04) : 941 - 963
  • [10] Exploiting Distributed-Memory and Shared-Memory Parallelism on Clusters of SMPs with Data Parallel Programs
    Benkner, Siegfried
    Sipkova, Viera
    International Journal of Parallel Programming, 2003, 31 : 3 - 19