Distributed Learning of Fully Connected Neural Networks using Independent Subnet Training

Cited by: 11
Authors
Yuan, Binhang [1 ]
Wolfe, Cameron R. [1 ]
Dun, Chen [1 ]
Tang, Yuxin [1 ]
Kyrillidis, Anastasios [1 ]
Jermaine, Chris [1 ]
Affiliations
[1] Rice Univ, Houston, TX 77251 USA
Source
PROCEEDINGS OF THE VLDB ENDOWMENT | 2022, Vol. 15, No. 8
Keywords
ALGORITHMS;
DOI
10.14778/3529337.3529343
Chinese Library Classification (CLC): TP [Automation Technology; Computer Technology]
Discipline Classification Code: 0812
Abstract
Distributed machine learning (ML) can bring more computational resources to bear than single-machine learning, thus enabling reductions in training time. Distributed learning partitions models and data over many machines, allowing model and dataset sizes beyond the available compute power and memory of a single machine. In practice, though, distributed ML is challenging when distribution is mandatory rather than chosen by the practitioner. In such scenarios, data may be unavoidably separated among workers due to limited memory capacity per worker or because of data privacy constraints. In these settings, existing distributed methods either fail due to dominant transfer costs across workers or do not apply at all. We propose a new approach to distributed fully connected neural network learning, called independent subnet training (IST), to handle these cases. In IST, the original network is decomposed into a set of narrow subnetworks of the same depth. These subnetworks are trained locally before parameters are exchanged to produce new subnets, and the training cycle repeats. Such a naturally "model parallel" approach limits memory usage by storing only a portion of the network parameters on each device. Additionally, there is no requirement to share data between workers (i.e., subnet training is local and independent), and communication volume and frequency are reduced by decomposing the original network into independent subnets. These properties of IST let it cope with distributed data, slow interconnects, or limited device memory, making IST a suitable approach for cases of mandatory distribution. We show experimentally that IST results in training times that are much lower than those of common distributed learning approaches.
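To make the mechanism concrete, below is a minimal single-process sketch of the neuron-partitioning idea described in the abstract, applied to a toy two-layer fully connected network in NumPy. The function names, hyperparameters, and the dropout-style rescaling used at evaluation are assumptions of this sketch, not the authors' implementation (which runs the subnets on separate workers over a real interconnect, each with its own data shard).

```python
# Minimal, single-process sketch of the Independent Subnet Training (IST) idea.
# NOTE: names (make_subnets, local_sgd) and all hyperparameters are
# illustrative assumptions, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n_workers = 20, 64, 1, 4

# Full model: x -> relu(x @ W1) @ W2
W1 = rng.normal(scale=0.1, size=(d_in, d_hidden))
W2 = rng.normal(scale=0.1, size=(d_hidden, d_out))

def make_subnets(W1, W2, n_workers, rng):
    """Randomly partition the hidden neurons; each worker receives the
    incoming and outgoing weights of its own neuron subset, i.e. a
    narrow subnetwork with the same depth as the original model."""
    parts = np.array_split(rng.permutation(W1.shape[1]), n_workers)
    return [(idx, W1[:, idx].copy(), W2[idx, :].copy()) for idx in parts]

def local_sgd(W1_s, W2_s, X, y, lr=0.05, steps=20):
    """A worker trains its subnet locally and independently (0.5 * MSE loss)."""
    for _ in range(steps):
        h = np.maximum(X @ W1_s, 0.0)       # hidden activations (ReLU)
        err = h @ W2_s - y                  # gradient of 0.5*MSE w.r.t. output
        gW2 = h.T @ err / len(X)
        gW1 = X.T @ ((err @ W2_s.T) * (h > 0)) / len(X)
        W1_s -= lr * gW1
        W2_s -= lr * gW2
    return W1_s, W2_s

# Synthetic regression data; in a real deployment each worker would hold
# its own local shard instead of a shared copy.
X = rng.normal(size=(256, d_in))
y = 2.0 * X[:, :1] + 0.1 * rng.normal(size=(256, 1))

for r in range(10):
    # "Parameter exchange": re-partition neurons into fresh subnets.
    for idx, W1_s, W2_s in make_subnets(W1, W2, n_workers, rng):
        W1_s, W2_s = local_sgd(W1_s, W2_s, X, y)
        W1[:, idx] = W1_s                   # write the trained piece back
        W2[idx, :] = W2_s
    # Dropout-style rescaling (a simplification of this sketch): each subnet
    # was trained to predict y on its own, so average their contributions.
    pred = np.maximum(X @ W1, 0.0) @ W2 / n_workers
    print(f"round {r}: full-model loss {0.5 * np.mean((pred - y) ** 2):.4f}")
```

In a distributed run, the loop over subnets would execute in parallel on separate workers, the write-back step would be the only communication, and each worker would train on local data only, which is what limits memory use and transfer volume.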
Pages: 1581-1590
Page count: 10