FINCH: Enhancing Federated Learning With Hierarchical Neural Architecture Search

Cited by: 5
Authors
Liu, Jianchun [1 ,2 ]
Yan, Jiaming [1 ,2 ]
Xu, Hongli [1 ,2 ]
Wang, Zhiyuan [1 ,2 ]
Huang, Jinyang [3 ,4 ]
Xu, Yang [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230027, Anhui, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou 215123, Jiangsu, Peoples R China
[3] Hefei Univ Technol, Sch Comp & Informat, Lab S2AC, Hefei 230002, Anhui, Peoples R China
[4] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Hefei 230002, Anhui, Peoples R China
Keywords
Computer architecture; Training; Computational modeling; Solid modeling; Servers; Mobile computing; Federated learning; Edge computing; Non-IID data; Neural architecture search
DOI
10.1109/TMC.2023.3315451
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) has been widely adopted to train machine learning models over massive data in edge computing. Most FL works employ pre-defined model architectures on all participating clients. However, these pre-defined architectures may not be optimal for the FL setting, since manually designing a high-performance neural architecture is complicated and burdensome, requires intense human expertise and effort, and can easily trap model training in suboptimal solutions. To this end, Neural Architecture Search (NAS) has been applied to FL to address this issue. Unfortunately, the search space of existing federated NAS approaches is extraordinarily large, resulting in unacceptable completion time on resource-constrained edge clients, especially under the non-independent and identically distributed (non-IID) setting. To remedy this, we propose a novel framework, called FINCH, which adopts hierarchical neural architecture search to enhance federated learning. In FINCH, we first divide the clients into several clusters according to their data distributions. Then, subnets are sampled from a pre-trained supernet and allocated to specific client clusters, which search for the optimal model architecture in parallel, significantly accelerating model searching and training. Extensive experimental results demonstrate the effectiveness of the proposed framework: FINCH reduces completion time by about 30.6% and achieves an average accuracy improvement of around 9.8% compared with the baselines.
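The abstract's two key steps, clustering clients by data distribution and sampling a subnet from the supernet for each cluster, can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the paper's actual algorithm: the greedy L1-distance clustering, the 0.5 threshold, and the operation set in `sample_subnet` are all hypothetical choices made for the sketch.

```python
import random
from collections import Counter

def label_histogram(labels, num_classes):
    """Normalized class-frequency vector summarizing a client's local data."""
    counts = Counter(labels)
    total = len(labels)
    return [counts.get(c, 0) / total for c in range(num_classes)]

def cluster_clients(histograms, threshold=0.5):
    """Greedy clustering: a client joins the first cluster whose
    representative histogram lies within an L1 distance threshold;
    otherwise it founds a new cluster."""
    clusters = []  # each entry: (representative_histogram, [client_ids])
    for cid, hist in enumerate(histograms):
        for rep, members in clusters:
            if sum(abs(a - b) for a, b in zip(rep, hist)) < threshold:
                members.append(cid)
                break
        else:
            clusters.append((hist, [cid]))
    return [members for _, members in clusters]

def sample_subnet(supernet_depth, rng):
    """Sample a subnet as one operation choice per supernet layer."""
    ops = ["conv3x3", "conv5x5", "skip"]
    return [rng.choice(ops) for _ in range(supernet_depth)]

rng = random.Random(0)
# Three clients: two share a skewed label distribution, one differs (non-IID).
data = [[0] * 8 + [1] * 2, [0] * 7 + [1] * 3, [2] * 9 + [3] * 1]
hists = [label_histogram(d, num_classes=4) for d in data]
clusters = cluster_clients(hists)
subnets = {i: sample_subnet(4, rng) for i in range(len(clusters))}
print(clusters)  # clients 0 and 1 grouped together; client 2 alone
print(subnets)   # one sampled subnet per cluster, searched in parallel
```

Each cluster would then run a federated search over its own subnet; because the subnets are much smaller than the full supernet, per-cluster search cost drops, which is the source of the speedup the abstract reports.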
Pages: 6012-6026 (15 pages)