The effect of network topologies on fully decentralized learning: a preliminary investigation

Cited: 2
Authors:
Palmieri, Luigi [1]
Valerio, Lorenzo [1]
Boldrini, Chiara [1]
Passarella, Andrea [1]
Affiliations:
[1] IIT CNR, Pisa, Italy
Source:
PROCEEDINGS OF THE FIRST INTERNATIONAL WORKSHOP ON NETWORKED AI SYSTEMS, NETAISYS 2023 | 2023
Funding:
EU Horizon 2020;
Keywords:
decentralized learning; graph topologies; non-IID data;
DOI:
10.1145/3597062.3597280
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory];
Discipline codes:
081104; 0812; 0835; 1405;
Abstract:
In a decentralized machine learning system, data is typically partitioned among multiple devices or nodes, each of which trains a local model using its own data. These local models are then shared and combined to create a global model that can make accurate predictions on new data. In this paper, we start exploring the role of the network topology connecting nodes on the performance of a Machine Learning model trained through direct collaboration between nodes. We investigate how different types of topologies impact the "spreading of knowledge", i.e., the ability of nodes to incorporate into their local models the knowledge derived from learning patterns in data available at other nodes across the network. Specifically, we highlight the different roles played in this process by more and less connected nodes (hubs and leaves), as well as by macroscopic network properties (primarily, degree distribution and modularity). Among other findings, we show that, while it is known that even weak connectivity among network components is sufficient for information spread, it may not be sufficient for knowledge spread. More intuitively, we also find that hubs have a more significant role than leaves in spreading knowledge, although this manifests itself not only for heavy-tailed degree distributions but also when "hubs" have only moderately more connections than leaves. Finally, we show that tightly knit communities severely hinder knowledge spread.
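The abstract describes local models that are shared with neighbours and combined over a communication graph. A minimal way to picture this process is synchronous gossip averaging, where each node repeatedly replaces its parameter with the average of itself and its neighbours; on a connected topology, all nodes drift toward a common consensus value, a toy analogue of "knowledge spread". The sketch below is illustrative only and is not the paper's algorithm; the star graph, node values, and round count are made-up assumptions.

```python
# Illustrative sketch (NOT the paper's method): synchronous gossip
# averaging of scalar "models" over a fixed communication graph.

def gossip_round(params, adjacency):
    """One round: each node averages its value with its neighbours' values."""
    new_params = {}
    for node, value in params.items():
        neighbours = adjacency[node]
        total = value + sum(params[n] for n in neighbours)
        new_params[node] = total / (1 + len(neighbours))
    return new_params

# Hypothetical 4-node "star" topology: node 0 is a hub, nodes 1-3 are leaves.
adjacency = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
params = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}

for _ in range(50):
    params = gossip_round(params, adjacency)

# On a connected graph the spread between nodes shrinks every round.
spread = max(params.values()) - min(params.values())
print(round(spread, 6))  # prints 0.0 (consensus reached)
```

With uniform self-plus-neighbour weights the consensus value need not equal the exact global mean (the update matrix is row-stochastic but not doubly stochastic); schemes such as Metropolis weights fix that, at the cost of a slightly more involved update.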
Pages: 25-30
Page count: 6