Multi-level Federated Learning Mechanism with Reinforcement Learning Optimizing in Smart City

Cited by: 3
|
Authors
Guo, Shaoyong [1 ]
Xiang, Baoyu [1 ]
Chen, Liandong [2 ]
Yang, Huifeng [2 ]
Yu, Dongxiao [3 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing 100876, Peoples R China
[2] State Grid Hebei Elect Power Co Ltd, Shijiazhuang, Peoples R China
[3] Shandong Univ, Sch Comp Sci & Technol, Qingdao, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Smart city; Federated learning; Reinforcement learning; Edge network;
DOI
10.1007/978-3-031-06791-4_35
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While taking data privacy protection into account, federated learning can mine knowledge from local data and aggregate its value, which has attracted wide attention in smart city and Internet of Things applications. At present, the massive edge networks in a smart city generate large amounts of data, but resources on the edge side are limited. Reducing the communication overhead between the edge and the centralized cloud server, accelerating model convergence, and avoiding the resource waste caused by synchronous blocking in federated learning have become the core issues for integrating federated learning with the Internet of Things in the smart city. To this end, this paper designs a multi-level federated learning mechanism for the smart city and uses reinforcement learning agents to select nodes, offsetting the influence of non-IID (not independent and identically distributed) data. At the same time, an asynchronous non-blocking update method is used for model aggregation and updating, releasing the resources of faster devices and improving the efficiency and stability of federated learning. Finally, simulation results show that the proposed method improves the efficiency of federated learning tasks in smart-city edge network scenarios with many devices.
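
The record does not include any code; the following is a minimal Python sketch of the two mechanisms the abstract describes, under assumed simplifications: an epsilon-greedy bandit is used in place of the paper's reinforcement-learning node-selection agent, and a FedAsync-style staleness-discounted mix is used in place of its asynchronous non-blocking aggregation. The names Client, select_clients, async_aggregate and the toy non-IID setup are illustrative and do not come from the paper.

```python
# Hedged sketch, not the authors' implementation: an epsilon-greedy bandit stands in
# for the RL node-selection agent, and a FedAsync-style staleness discount stands in
# for the asynchronous non-blocking aggregation. All names and the toy non-IID setup
# are illustrative.
import random
import numpy as np

DIM, NUM_CLIENTS, ROUNDS = 10, 20, 50
rng = np.random.default_rng(0)

class Client:
    def __init__(self, cid):
        self.cid = cid
        # Non-IID local data, modeled as a client-specific target the local model drifts toward.
        self.local_target = rng.normal(loc=cid % 4, scale=1.0, size=DIM)
        self.delay = random.randint(1, 4)   # rounds before this client's update arrives

    def local_update(self, global_model, lr=0.5):
        # One local training step toward the client's own optimum.
        return global_model + lr * (self.local_target - global_model)

clients = [Client(i) for i in range(NUM_CLIENTS)]
q_values = np.zeros(NUM_CLIENTS)          # bandit estimate of how useful each node has been
global_model = np.zeros(DIM)
consensus_target = np.full(DIM, 1.5)      # stand-in for the global objective
pending = []                              # (arrival_round, client_id, update, sent_round)

def select_clients(eps=0.2, k=5):
    """Epsilon-greedy stand-in for the paper's RL node-selection agent."""
    if random.random() < eps:
        return random.sample(range(NUM_CLIENTS), k)
    return [int(i) for i in np.argsort(q_values)[-k:]]

def async_aggregate(model, arrivals, current_round, base_mix=0.3):
    """Non-blocking aggregation: mix in whatever arrived, discounting stale updates."""
    for cid, update, sent_round in arrivals:
        alpha = base_mix / (1.0 + (current_round - sent_round))   # staleness discount
        model = (1 - alpha) * model + alpha * update
    return model

for rnd in range(ROUNDS):
    # Dispatch local training to the selected nodes; results arrive after each node's delay.
    for cid in select_clients():
        update = clients[cid].local_update(global_model)
        pending.append((rnd + clients[cid].delay, cid, update, rnd))

    # Aggregate only updates that have already arrived; slow nodes never block the round.
    arrived = [(c, u, s) for (t, c, u, s) in pending if t <= rnd]
    pending = [p for p in pending if p[0] > rnd]
    prev_loss = float(np.linalg.norm(global_model - consensus_target))
    global_model = async_aggregate(global_model, arrived, rnd)

    # Reward the contributing nodes by how much the global objective improved.
    new_loss = float(np.linalg.norm(global_model - consensus_target))
    for cid, _, _ in arrived:
        q_values[cid] += 0.1 * ((prev_loss - new_loss) - q_values[cid])

print("final distance to global target:", round(new_loss, 3))
```

In this toy run the server never waits for slow nodes: each round it mixes in whatever updates have arrived, down-weighted by staleness, which mirrors the non-blocking behavior the abstract says frees the resources of faster devices.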
Pages: 441-454
Page count: 14
Related Papers
50 records in total
  • [1] Towards Cooperative Caching for Vehicular Networks with Multi-level Federated Reinforcement Learning
    Zhao, Lei
    Ran, Yongyi
    Wang, Hao
    Wang, Junxia
    Luo, Jiangtao
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [2] Multi-Level Branched Regularization for Federated Learning
    Kim, Jinkyu
    Kim, Geeho
    Han, Bohyung
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [3] Dynamic heterogeneous federated learning with multi-level prototypes
    Guo, Shunxin
    Wang, Hongsong
    Geng, Xin
    PATTERN RECOGNITION, 2024, 153
  • [4] Optimizing smart city planning: A deep reinforcement learning framework
    Park, Junyoung
    Baek, Jiwoo
    Song, Yujae
    ICT EXPRESS, 2025, 11 (01): 129-134
  • [5] AMFL: Asynchronous Multi-level Federated Learning with Client Selection
    Li, Xuerui
    Zhao, Yangming
    Qiao, Chunming
    2024 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA, ICCC, 2024,
  • [6] Creating Multi-Level Skill Hierarchies in Reinforcement Learning
    Evans, Joshua B.
    Simsek, Ozgur
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [7] Optimizing Mobile Edge Computing Multi-Level Task Offloading via Deep Reinforcement Learning
    Yan, Peizhi
    Choudhury, Salimur
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [8] Multi-Level Split Federated Learning for Large-Scale AIoT System Based on Smart Cities
    Xu, Hanyue
    Seng, Kah Phooi
    Smith, Jeremy
    Ang, Li Minn
    FUTURE INTERNET, 2024, 16 (03)
  • [9] Optimizing Federated Learning on Non-IID Data with Reinforcement Learning
    Wang, Hao
    Kaplan, Zakhary
    Niu, Di
    Li, Baochun
    IEEE INFOCOM 2020 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS, 2020: 1698-1707
  • [10] Multi-Level Policy and Reward Reinforcement Learning for Image Captioning
    Liu, An-An
    Xu, Ning
    Zhang, Hanwang
    Nie, Weizhi
    Su, Yuting
    Zhang, Yongdong
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018: 821-827