CrowdLearning: A Decentralized Distributed Training Framework Based on Collectives of Trusted AIoT Devices

Cited by: 0
Authors
Wang, Ziqi [1 ]
Liu, Sicong [1 ]
Guo, Bin [1 ]
Yu, Zhiwen [2 ,3 ]
Zhang, Daqing [4 ]
Affiliations
[1] Northwestern Polytech Univ, Minist Ind & Informat Technol, Key Lab Intellectual Percept & Comp, Key Lab Man Machine Object Integrat & Intelligent, Xian 710100, Peoples R China
[2] Northwestern Polytech Univ, Xian 710072, Peoples R China
[3] Harbin Engn Univ, Harbin 150001, Peoples R China
[4] Peking Univ, Beijing 100000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Mobile handsets; Task analysis; Computational modeling; Distributed databases; Data models; Federated learning; Artificial Internet of Things; distributed training; mobile computing;
DOI
10.1109/TMC.2024.3427636
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
With the rise of the Artificial Intelligence of Things (AIoT), integrating deep neural networks (DNNs) into mobile and embedded devices has become a significant trend, enhancing the data collection and analysis capabilities of IoT devices. Traditional integration paradigms rely on cloud-based training and terminal deployment, but they often suffer from delayed model updates, decreased accuracy, and increased communication overhead in dynamic real-world environments. Consequently, on-device training methods have garnered research focus. However, limited local perception data and computational resources create bottlenecks in training efficiency. Federated Learning emerged to address these challenges, but it faces slow model convergence and reduced accuracy because data privacy concerns restrict the sharing of data and model details. In contrast, we propose the concept of real-world trusted clusters (such as personal devices in smart spaces, or trusted devices from the same organization or company), in which devices prioritize computational efficiency and can share private data and model details with one another. We propose CrowdLearning, a decentralized distributed training framework based on trusted AIoT device collectives. The framework comprises two collaborative modules: a heterogeneous resource-aware task offloading module that alleviates training latency bottlenecks, and an efficient communication data reallocation module that determines the timing, manner, and recipients of data transmission, thereby enhancing DNN training efficiency and effectiveness. Experimental results demonstrate that, across various scenarios, CrowdLearning outperforms existing federated learning and on-device distributed training baselines, reducing training latency by 55.8% and lowering communication costs by 67.1%.
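The two modules described in the abstract can be illustrated with a minimal sketch: splitting a training workload across devices in proportion to their compute speed (the task-offloading idea) and averaging parameter updates shared within the trusted cluster (decentralized, serverless aggregation). All names here (`offload_batches`, `average_updates`) are illustrative assumptions, not CrowdLearning's actual API.

```python
def offload_batches(total_batches, speeds):
    """Split a training workload across devices proportionally to their
    measured compute speed, so the slowest device stops being the bottleneck."""
    total_speed = sum(speeds)
    shares = [round(total_batches * s / total_speed) for s in speeds]
    shares[-1] += total_batches - sum(shares)  # absorb rounding drift
    return shares

def average_updates(updates):
    """Decentralized aggregation: each device averages the parameter updates
    exchanged within the trusted cluster (no central server involved)."""
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

# Three devices with relative compute speeds 4:2:1 share 12 batches.
shares = offload_batches(12, [4.0, 2.0, 1.0])
print(shares)  # → [7, 3, 2]: the fastest device takes the largest share

# Each device contributes a parameter update; all average the same result.
updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(average_updates(updates))  # → [3.0, 4.0]
```

The proportional split captures only the latency aspect; the paper's offloading module additionally accounts for heterogeneous resources, and its reallocation module decides when and to whom data is sent, which this sketch omits.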
Pages: 13420-13437 (18 pages)