Robust Distributed Bayesian Learning with Stragglers via Consensus Monte Carlo

Cited by: 0
Authors
Chittoor, Hari Hara Suthan [1 ]
Simeone, Osvaldo [1 ]
Affiliations
[1] King's College London, Department of Engineering, KCLIP Lab, London, England
Funding
European Research Council
Keywords
Distributed Bayesian learning; stragglers; Consensus Monte Carlo; grouping; coded computing
DOI
10.1109/GLOBECOM48099.2022.10001070
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
This paper studies distributed Bayesian learning in a setting encompassing a central server and multiple workers by focusing on the problem of mitigating the impact of stragglers. The standard one-shot, or embarrassingly parallel, Bayesian learning protocol known as consensus Monte Carlo (CMC) is generalized by proposing two straggler-resilient solutions based on grouping and coding. Two main challenges in designing straggler-resilient algorithms for CMC are the need to estimate the statistics of the workers' outputs across multiple shots, and the joint non-linear post-processing of the outputs of the workers carried out at the server. This is in stark contrast to other distributed settings like gradient coding, which only require the per-shot sum of the workers' outputs. The proposed methods, referred to as Group-based CMC (G-CMC) and Coded CMC (C-CMC), leverage redundant computing at the workers in order to enable the estimation of global posterior samples at the server based on partial outputs from the workers. Simulation results show that C-CMC may outperform G-CMC for a small number of workers, while G-CMC is generally preferable for a larger number of workers.
Pages: 609-614 (6 pages)