GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning

Cited by: 0
Authors
Elgabli, Anis
Park, Jihong
Bedi, Amrit S.
Bennis, Mehdi
Aggarwal, Vaneet
Affiliations
Keywords
OPTIMIZATION; CONSENSUS; CONVERGENCE; ALGORITHM; ADMM;
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
When data is distributed across multiple servers, lowering the communication cost between the servers (or workers) while solving the distributed learning problem is an important challenge and is the focus of this paper. In particular, we propose a fast and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, Group Alternating Direction Method of Multipliers (GADMM), is based on the Alternating Direction Method of Multipliers (ADMM) framework. The key novelty in GADMM is that it solves the problem in a decentralized topology where at most half of the workers are competing for the limited communication resources at any given time. Moreover, each worker exchanges the locally trained model only with two neighboring workers, thereby training a global model with lower communication overhead in each exchange. We prove that GADMM converges to the optimal solution for convex loss functions, and numerically show that it converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as the Lazily Aggregated Gradient (LAG) and dual averaging, in linear and logistic regression tasks on synthetic and real datasets. Furthermore, we propose Dynamic GADMM (D-GADMM), a variant of GADMM, and prove its convergence under the time-varying network topology of the workers.
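To make the alternating group-update pattern described above concrete, the following is a minimal sketch (not the paper's exact formulation) of GADMM-style updates on a chain of workers, assuming each worker holds a local least-squares loss and consensus constraints theta_n = theta_{n+1} between neighbors. The worker count, penalty parameter rho, synthetic data, and iteration count are illustrative assumptions.

# Minimal illustrative sketch of the GADMM update pattern described in the
# abstract: workers form a chain, are split into a head and a tail group,
# and only one group updates (and transmits to its two neighbors) at a time.
# The least-squares losses, worker count, rho, and iteration count are
# assumptions for illustration, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)
N, d, m, rho = 6, 5, 20, 1.0                  # workers, model dim, samples per worker, ADMM penalty

# Synthetic local data; every worker observes the same ground-truth model.
theta_true = rng.normal(size=d)
A = [rng.normal(size=(m, d)) for _ in range(N)]
b = [A_n @ theta_true + 0.01 * rng.normal(size=m) for A_n in A]

theta = [np.zeros(d) for _ in range(N)]       # local models theta_1..theta_N
lam = [np.zeros(d) for _ in range(N - 1)]     # one dual variable per chain edge (n, n+1)

def local_update(n):
    # Closed-form minimizer of worker n's augmented-Lagrangian term for the
    # quadratic loss 0.5*||A_n theta - b_n||^2, using only the latest models
    # of its (at most) two neighbors and the duals on the shared edges.
    rhs = A[n].T @ b[n]
    scale = 0.0
    if n > 0:                                  # edge with left neighbor (n-1, n)
        rhs += lam[n - 1] + rho * theta[n - 1]
        scale += rho
    if n < N - 1:                              # edge with right neighbor (n, n+1)
        rhs += -lam[n] + rho * theta[n + 1]
        scale += rho
    return np.linalg.solve(A[n].T @ A[n] + scale * np.eye(d), rhs)

for _ in range(200):
    for n in range(0, N, 2):                   # head group updates and transmits
        theta[n] = local_update(n)
    for n in range(1, N, 2):                   # tail group updates with the fresh head models
        theta[n] = local_update(n)
    for n in range(N - 1):                     # dual ascent on every consensus edge
        lam[n] += rho * (theta[n] - theta[n + 1])

print("max neighbor disagreement:",
      max(np.linalg.norm(theta[n] - theta[n + 1]) for n in range(N - 1)))
print("distance to ground truth:", np.linalg.norm(theta[0] - theta_true))

In each iteration only the group currently updating transmits its new model, and only to its two neighbors, which is the source of the communication savings described in the abstract.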
Pages: 39
Related Papers (50 in total)
  • [41] More communication-efficient distributed sparse learning
    Zhou, Xingcai
    Yang, Guang
    INFORMATION SCIENCES, 2024, 668
  • [43] Communication Efficient Distributed Learning for Kernelized Contextual Bandits
    Li, Chuanhao
    Wang, Huazheng
    Wang, Mengdi
    Wang, Hongning
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [44] Stay Fresh: Speculative Synchronization for Fast Distributed Machine Learning
    Zhang, Chengliang
    Tian, Huangshi
    Wang, Wei
    Yan, Feng
    2018 IEEE 38TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS), 2018, : 99 - 109
  • [45] CodedReduce: A Fast and Robust Framework for Gradient Aggregation in Distributed Learning
    Reisizadeh, Amirhossein
    Prakash, Saurav
    Pedarsani, Ramtin
    Avestimehr, Amir Salman
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2022, 30 (01) : 148 - 161
  • [46] Communication Efficient Federated Learning Framework with Local Momentum
    Xie, Renyou
    Zhou, Xiaojun
    2022 15TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTION (HSI), 2022,
  • [47] Editorial: Introduction to the Issue on Distributed Machine Learning for Wireless Communication
    Yang, Ping
    Dobre, Octavia A.
    Xiao, Ming
    Di Renzo, Marco
    Li, Jun
    Quek, Tony Q. S.
    Han, Zhu
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16 (03) : 320 - 325
  • [48] A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
    Zhang, Jilin
    Tu, Hangdi
    Ren, Yongjian
    Wan, Jian
    Zhou, Li
    Li, Mingwei
    Wang, Jue
    Yu, Lifeng
    Zhao, Chang
    Zhang, Lei
    SENSORS, 2017, 17 (10)
  • [49] AdapCC: Making Collective Communication in Distributed Machine Learning Adaptive
    Zhao, Xiaoyang
    Zhang, Zhe
    Wu, Chuan
    2024 IEEE 44TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, ICDCS 2024, 2024, : 25 - 35
  • [50] An Efficient Parallel Secure Machine Learning Framework on GPUs
    Zhang, Feng
    Chen, Zheng
    Zhang, Chenyang
    Zhou, Amelie Chi
    Zhai, Jidong
    Du, Xiaoyong
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, 32 (09) : 2262 - 2276