Resource-efficient Parallel Split Learning in Heterogeneous Edge Computing

Cited: 0
|
Authors
Zhang, Mingjin [1 ]
Cao, Jiannong [1 ]
Sahni, Yuvraj [1 ]
Chen, Xiangchun [1 ]
Jiang, Shan [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Hong Kong, Peoples R China
Keywords
Edge Computing; Federated Learning; Edge AI; Task Scheduling;
DOI
10.1109/CNC59896.2024.10556386
CLC Number
TP [Automation & Computer Technology];
Subject Classification
0812;
Abstract
Edge AI has recently been proposed to facilitate the training and deployment of Deep Neural Network (DNN) models in proximity to the sources of data. To enable the training of large models on resource-constrained edge devices and protect data privacy, parallel split learning is becoming a practical and popular approach. However, current parallel split learning neglects the resource heterogeneity of edge devices, which may lead to the straggler issue. In this paper, we propose EdgeSplit, a novel parallel split learning framework to better accelerate distributed model training on heterogeneous, resource-constrained edge devices. EdgeSplit enhances the efficiency of model training on less powerful edge devices by adaptively segmenting the model into varying depths. Our approach focuses on reducing total training time by formulating and solving a task scheduling problem, which determines the most efficient model partition points and bandwidth allocation for each device. We employ a straightforward yet effective alternating algorithm for this purpose. Comprehensive tests conducted with a range of DNN models and datasets demonstrate that EdgeSplit not only facilitates the training of large models on resource-restricted edge devices but also surpasses existing baselines in performance.
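The alternating scheme the abstract describes — jointly choosing per-device model partition points and bandwidth shares to minimize total training time — can be sketched as follows. This is a minimal illustration under assumed simplifications, not the paper's formulation: the layer cost model (FLOPs and activation bytes), the proportional-bandwidth heuristic in step 2, and all function names are hypothetical.

```python
# Hypothetical sketch of alternating split-point / bandwidth optimization.
# Assumed cost model: per-round time for a device that keeps the first p
# layers locally is (device compute) + (activation transfer) + (server
# compute); the round finishes when the slowest device does (straggler).

def round_time(layers, dev_speed, srv_speed, p, bw):
    """Per-round time for one device splitting after layer p (p >= 1)."""
    dev = sum(l["flops"] for l in layers[:p]) / dev_speed   # local forward
    com = layers[p - 1]["act_bytes"] / bw                   # send activations
    srv = sum(l["flops"] for l in layers[p:]) / srv_speed   # remote layers
    return dev + com + srv

def alternating_schedule(layers, dev_speeds, srv_speed, total_bw, iters=20):
    """Alternate between per-device split selection and bandwidth shares."""
    n = len(dev_speeds)
    bw = [total_bw / n] * n                  # start from an equal split
    cuts = [1] * n
    for _ in range(iters):
        # Step 1: given bandwidth, each device picks its fastest cut point.
        cuts = [min(range(1, len(layers) + 1),
                    key=lambda p: round_time(layers, s, srv_speed, p, b))
                for s, b in zip(dev_speeds, bw)]
        # Step 2: given cuts, share bandwidth in proportion to each
        # device's communication demand (a simple heuristic stand-in
        # for the paper's bandwidth-allocation subproblem).
        demand = [layers[c - 1]["act_bytes"] for c in cuts]
        total = sum(demand)
        bw = [total_bw * d / total for d in demand]
    times = [round_time(layers, s, srv_speed, c, b)
             for s, c, b in zip(dev_speeds, cuts, bw)]
    return cuts, bw, max(times)              # max(times) = round makespan
```

A slower device will tend to settle on a shallow cut (offloading most layers), which is the behavior the framework relies on to avoid stragglers.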
Pages: 794-798
Page count: 5
Related Papers
50 records total
  • [31] Energy-Efficient Resource Allocation for Heterogeneous Edge-Cloud Computing
    Hua, Wei
    Liu, Peng
    Huang, Linyu
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (02) : 2808 - 2818
  • [32] Survey on Heterogeneous Parallel Computing Platform for Edge Intelligent Computing
    Wan, Duo
    Hu, Moufa
    Xiao, Shanzhu
    Zhang, Yan
    Computer Engineering and Applications, 2023, 59 (01): : 15 - 25
  • [33] A Resource-Efficient Computing Paradigm for Computational Protein Modeling Applications
    Li, Yaohang
    Wardell, Douglas
    Freeh, Vincent
    2009 IEEE INTERNATIONAL SYMPOSIUM ON PARALLEL & DISTRIBUTED PROCESSING, VOLS 1-5, 2009, : 1578 - +
  • [34] MobiLipNet: Resource-efficient deep learning based lipreading
    Koumparoulis, Alexandros
    Potamianos, Gerasimos
    INTERSPEECH 2019, 2019, : 2763 - 2767
  • [35] Resource-Efficient Federated Learning for Network Intrusion Detection
    Doriguzzi-Corin, Roberto
    Cretti, Silvio
    Siracusa, Domenico
    2024 IEEE 10TH INTERNATIONAL CONFERENCE ON NETWORK SOFTWARIZATION, NETSOFT 2024, 2024, : 357 - 362
  • [36] RESOURCE-EFFICIENT FEDERATED LEARNING ROBUST TO COMMUNICATION ERRORS
    Lari, Ehsan
    Gogineni, Vinay Chakravarthi
    Arablouei, Reza
    Werner, Stefan
    2023 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP, SSP, 2023, : 265 - 269
  • [37] Resource-efficient and sustainable
    Konstruktion, 2016, 68 (03)
  • [38] Resource-Efficient Wearable Computing for Real-Time Reconfigurable Machine Learning: A Cascading Binary Classification
    Pedram, Mahdi
    Rokni, Seyed Ali
    Nourollahi, Marjan
    Homayoun, Houman
    Ghasemzadeh, Hassan
    2019 IEEE 16TH INTERNATIONAL CONFERENCE ON WEARABLE AND IMPLANTABLE BODY SENSOR NETWORKS (BSN), 2019,
  • [39] FedOPT: federated learning-based heterogeneous resource recommendation and optimization for edge computing
    Ahmed, Syed Thouheed
    Kumar, V. Vinoth
    Mahesh, T. R.
    Prasad, L. V. Narasimha
    Velmurugan, A. K.
    Muthukumaran, V.
    Niveditha, V. R.
    SOFT COMPUTING, 2024,
  • [40] Resource-efficient streaming architecture for ensemble Kalman filters designed for online learning in physical reservoir computing
    Tamada, Kota
    Abe, Yuki
    Asai, Tetsuya
    IEICE NONLINEAR THEORY AND ITS APPLICATIONS, 2025, 16 (01): : 120 - 131