Coding-Based Performance Improvement of Distributed Machine Learning in Large-Scale Clusters

Cited by: 0
Authors
Wang Y. [1 ]
Li N. [1 ]
Wang X. [1 ]
Zhong F. [1 ]
Affiliations
[1] School of Software, East China Jiaotong University, Nanchang
Funding
National Natural Science Foundation of China;
Keywords
Coding technology; Distributed computing; Machine learning; Performance improvement; Straggler tolerance;
DOI
10.7544/issn1000-1239.2020.20190286
Abstract
With the growth of models and data sets, running large-scale machine learning algorithms in distributed clusters has become common practice. This approach divides the machine learning algorithm and its training data into several tasks, each of which runs on a different worker node; a master node then combines the results of all tasks to obtain the result of the whole algorithm. When a distributed cluster contains a large number of nodes, some worker nodes, called stragglers, inevitably run slower than the others due to resource contention and similar causes, so the tasks assigned to them take significantly longer than those on other nodes. Compared with running replicated tasks on multiple nodes, coded computing exploits computation and storage redundancy more efficiently to mitigate the effect of stragglers and communication bottlenecks in large-scale machine learning clusters. This paper surveys the research progress on using coding technology to tolerate stragglers and improve the performance of large-scale machine learning clusters. First, we introduce the background of coding technology and large-scale machine learning clusters. Second, we divide the related research into several categories according to application scenario: matrix multiplication, gradient computation, data shuffling, and other applications. Finally, we summarize the difficulties of applying coding technology in large-scale machine learning clusters and discuss future research trends. © 2020, Science Press. All rights reserved.
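The coded-computing idea the abstract describes can be illustrated with a minimal sketch (not taken from the paper; all names here are illustrative): a (3, 2) MDS-style code for distributed matrix-vector multiplication, where a third worker computes a parity block so the master can recover the full product from any two of three workers, tolerating one straggler.

```python
import numpy as np

# Sketch of straggler-tolerant coded matrix-vector multiplication.
# A is split into two row blocks; worker 3 computes a "parity" block
# (the sum of the two), so A @ x is recoverable from ANY 2 of 3 workers.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

A1, A2 = A[:2], A[2:]   # uncoded row blocks for workers 1 and 2
A3 = A1 + A2            # coded (parity) block for worker 3

# Each worker's local computation.
results = {1: A1 @ x, 2: A2 @ x, 3: A3 @ x}

def decode(done):
    """Recover A @ x from the results of any two finished workers."""
    if 1 in done and 2 in done:
        return np.concatenate([done[1], done[2]])
    if 1 in done and 3 in done:                    # A2 x = A3 x - A1 x
        return np.concatenate([done[1], done[3] - done[1]])
    return np.concatenate([done[3] - done[2], done[2]])  # A1 x = A3 x - A2 x

# Suppose worker 2 is the straggler: decode from workers 1 and 3 only.
y = decode({1: results[1], 3: results[3]})
assert np.allclose(y, A @ x)
```

With replication, tolerating one straggler would require running each block twice (100% redundancy); the code above achieves the same tolerance with a single extra coded task, which is the efficiency gain the survey attributes to coded computing.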
Pages: 542-561
Page count: 19
Related papers
62 references in total
  • [41] Charles Z., Papailiopoulos D., Ellenberg J., Approximate gradient coding via sparse random graphs, (2017)
  • [42] Tandon R., Lei Q., Dimakis A.G., Et al., Gradient coding, (2016)
  • [43] Li S., Maddah-Ali M.A., Avestimehr A.S., Coded MapReduce, Proc of the 53rd Annual Allerton Conf on Communication, Control, and Computing, pp. 964-971, (2015)
  • [44] Lee K.H., Lee Y.J., Choi H., Et al., Parallel data processing with MapReduce: A survey, ACM SIGMOD Record, 40, 4, pp. 11-20, (2012)
  • [45] Jiang D., Ooi B.C., Shi L., Et al., The performance of MapReduce: An in-depth study, Proceedings of the VLDB Endowment, 3, 1-2, pp. 472-483, (2010)
  • [46] Li S., Maddah-Ali M.A., Avestimehr A.S., A unified coding framework for distributed computing with straggling servers, (2016)
  • [47] Li S., Supittayapornpong S., Maddah-Ali M.A., Et al., Coded TeraSort, Proc of IEEE Int Parallel and Distributed Processing Symp Workshops (IPDPSW), pp. 389-398, (2017)
  • [48] Song L., Fragouli C., A pliable index coding approach to data shuffling, (2017)
  • [49] Attia M.A., Tandon R., Near optimal coded data shuffling for distributed learning, (2018)
  • [50] Wan K., Tuninetti D., Piantanida P., On the optimality of uncoded cache placement, Proc of IEEE Information Theory Workshop (ITW), pp. 161-165, (2016)