Function Placement for In-network Federated Learning

Cited by: 0
Authors
Yellas, Nour-El-Houda [1 ,2 ]
Addis, Bernardetta [3 ]
Boumerdassi, Selma [1 ]
Riggio, Roberto [4 ]
Secci, Stefano [1 ]
Affiliations
[1] Cnam, Paris, France
[2] Orange, Chatillon, France
[3] Univ Lorraine, CNRS, LORIA, Nancy, France
[4] Polytech Univ Marche, Ancona, Italy
Funding
EU Horizon 2020;
Keywords
Federated learning; Artificial intelligence functions; Placement; EDGE INTELLIGENCE; CLIENT SELECTION; FRAMEWORK;
DOI
10.1016/j.comnet.2024.110900
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline Code
0812;
Abstract
Federated learning (FL), particularly when data is distributed across multiple clients, helps reduce learning time by avoiding training on a massive, centralized pile-up of data. Nonetheless, low computation capacity or poor network conditions can worsen the convergence time, thereby decreasing accuracy and learning performance. In this paper, we propose a framework to deploy FL clients in a network while compensating for end-to-end time variations due to heterogeneous network settings. We present a new distributed learning control scheme, named In-network Federated Learning Control (IFLC), which supports the operations of distributed federated learning functions in geographically distributed networks and is designed to mitigate stragglers at lower deployment cost. IFLC adapts the allocation of distributed hardware accelerators to modulate the importance of local training latency in the end-to-end delay of federated learning applications, considering both deterministic and stochastic delay scenarios. Through extensive simulations on realistic instances of an in-network anomaly detection application, we show that the absence of hardware accelerators can strongly impair learning efficiency. Additionally, we show that providing hardware accelerators at only 50% of the nodes can reduce the number of stragglers by at least 50% and up to 100% with respect to a baseline FIRST-FIT algorithm, while also lowering the deployment cost by up to 30% with respect to the case without hardware accelerators. Finally, we explore the effect of topology changes on IFLC across both hierarchical and flat topologies.
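The core idea in the abstract — placing hardware accelerators at a subset of nodes so that local training latency no longer dominates end-to-end delay — can be illustrated with a toy simulation. This is a minimal sketch of the general straggler-mitigation concept, not the paper's IFLC algorithm or its optimization model; all numbers (node delays, speedup factor, round deadline) and the greedy placement heuristic are assumptions made for illustration only.

```python
# Toy model (illustrative only, NOT the paper's IFLC scheme): clients whose
# end-to-end delay (network delay + local training time) exceeds a round
# deadline are counted as stragglers. Accelerating the slowest nodes cuts
# their training time and hence the straggler count.
import random

random.seed(7)

NUM_NODES = 20
DEADLINE = 1.0          # synchronous round deadline in seconds (assumed)
ACCEL_SPEEDUP = 4.0     # assumed local-training speedup from an accelerator
ACCEL_FRACTION = 0.5    # accelerators at 50% of the nodes, as in the abstract

# Assumed per-node delays: network delay + CPU-only local training time.
net_delay = [random.uniform(0.05, 0.3) for _ in range(NUM_NODES)]
train_time = [random.uniform(0.2, 1.5) for _ in range(NUM_NODES)]

def stragglers(accelerated):
    """Count clients whose end-to-end delay exceeds the round deadline."""
    count = 0
    for i in range(NUM_NODES):
        t = train_time[i] / (ACCEL_SPEEDUP if i in accelerated else 1.0)
        if net_delay[i] + t > DEADLINE:
            count += 1
    return count

# Baseline: no accelerators anywhere.
base = stragglers(set())

# Greedy placement heuristic (an assumption, not IFLC): give the accelerator
# budget to the nodes with the largest end-to-end delay.
budget = int(ACCEL_FRACTION * NUM_NODES)
worst = sorted(range(NUM_NODES),
               key=lambda i: net_delay[i] + train_time[i],
               reverse=True)[:budget]
after = stragglers(set(worst))

print(f"stragglers without accelerators: {base}")
print(f"stragglers with accelerators at {budget} nodes: {after}")
```

Because accelerating a node can only shrink its delay, the accelerated configuration never has more stragglers than the baseline; how close the reduction gets to the 50-100% reported in the paper depends on the delay distribution and the placement policy.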
Pages: 18
Related Papers
50 records in total
  • [11] Expediting In-Network Federated Learning by Voting-Based Consensus Model Compression
    Su, Xiaoxin
    Zhou, Yipeng
    Cui, Laizhong
    Guo, Song
    IEEE INFOCOM 2024-IEEE CONFERENCE ON COMPUTER COMMUNICATIONS, 2024, : 1271 - 1280
  • [12] Automated Placement of In-Network ACL Rules
    Zahwa, Wafik
    Lahmadi, Abdelkader
    Rusinowitch, Michael
    Ayadi, Mondher
    2023 IEEE 9TH INTERNATIONAL CONFERENCE ON NETWORK SOFTWARIZATION, NETSOFT, 2023, : 486 - 491
  • [13] Age-Based Federated Learning Approach to In-Network Caching: An Online Scheduling Policy
    Cao, Yuwen
    Maghsudi, Setareh
    Ohtsuki, Tomoaki
    ICC 2024 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2024, : 1437 - 1442
  • [14] Eliminating Communication Bottlenecks in Cross-Device Federated Learning with In-Network Processing at the Edge
    Luo, Shouxi
    Fan, Pingzhi
    Xing, Huanlai
    Luo, Long
    Yu, Hongfang
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 4601 - 4606
  • [15] In-Network Computation for Large-Scale Federated Learning Over Wireless Edge Networks
    Dinh, Thinh Quang
    Nguyen, Diep N.
    Hoang, Dinh Thai
    Pham, Tran Vu
    Dutkiewicz, Eryk
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (10) : 5918 - 5932
  • [16] Optimal In-network Cache Allocation and Content Placement
    Azimdoost, Bita
    Farhadi, Golnaz
    Abani, Noor
    Ito, Akira
    2015 IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2015, : 263 - 268
  • [17] Energy-Efficient Federated Learning for Internet of Things: Leveraging In-Network Processing and Hierarchical Clustering
    Baqer, M.
    FUTURE INTERNET, 2025, 17 (01)
  • [18] On Scalable In-Network Operator Placement for Edge Computing
    Gedeon, Julien
    Stein, Michael
    Wang, Lin
    Muehlhaeuser, Max
    2018 27TH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATION AND NETWORKS (ICCCN), 2018,
  • [19] Parallel Placement of Virtualized Network Functions via Federated Deep Reinforcement Learning
    Huang, Haojun
    Tian, Jialin
    Min, Geyong
    Yin, Hao
    Zeng, Cheng
    Zhao, Yangming
    Wu, Dapeng Oliver
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (04) : 2936 - 2949
  • [20] A computational model to support in-network data analysis in federated ecosystems
    Zamani, Ali Reza
    Zou, Mengsong
    Diaz-Montes, Javier
    Petri, Ioan
    Rana, Omer
    Parashar, Manish
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2018, 80 : 342 - 354