High-performance federated continual learning algorithm for heterogeneous streaming data

Cited by: 0
Authors
Jiang H. [1 ,2 ]
He T. [1 ,2 ]
Liu M. [1 ,2 ,3 ]
Sun S. [1 ]
Wang Y. [1 ,2 ]
Affiliations
[1] Institute of Computing Technology, Chinese Academy of Sciences, Beijing
[2] School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing
[3] Zhongguancun Laboratory, Beijing
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
catastrophic forgetting; federated continual learning; federated learning; heterogeneous data; streaming data;
DOI
10.11959/j.issn.1000-436x.2023102
Abstract
To address the poor model performance and low training efficiency that arise when AI models providing intelligent services are trained on streaming data, a high-performance federated continual learning algorithm for heterogeneous streaming data (FCL-HSD) was proposed for distributed terminal systems holding private data. To prevent the current model from forgetting old data, a dynamically extensible model structure was introduced in the local training stage, and an extension audit mechanism was designed to preserve the model's ability to recognize old data at a small storage cost. Considering the heterogeneity of terminal data, a customized global model strategy based on data-distribution similarity was designed on the central server side, and block-wise aggregation was applied to the different modules of the model. The feasibility and effectiveness of the proposed algorithm were verified on multiple datasets under various data-increment scenarios. Experimental results show that, compared with existing works, the proposed algorithm effectively improves classification performance on old data while preserving the ability to classify new data. © 2023 Editorial Board of Journal on Communications. All rights reserved.
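The abstract describes a server-side step that builds a customized global model per client by aggregating model blocks weighted by data-distribution similarity. A minimal illustrative sketch of that idea follows; the similarity metric (cosine over label histograms), the block layout, and all function names are assumptions for demonstration, since the record does not give the paper's actual FCL-HSD formulation:

```python
import numpy as np

def label_similarity(p, q):
    # Cosine similarity between two clients' label-count histograms
    # (an assumed proxy for "data distribution similarity").
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

def customized_aggregate(blocks, label_hists, target):
    """Build a customized model for `target` by aggregating each block
    across clients, weighting clients by distribution similarity.

    blocks: per-client dicts {block_name: ndarray of parameters}
    label_hists: per-client label histograms
    """
    weights = np.array([label_similarity(label_hists[target], h)
                        for h in label_hists])
    weights /= weights.sum()  # normalize to a convex combination
    # Aggregate block by block, so different modules could in principle
    # use different weighting schemes.
    return {name: sum(w * b[name] for w, b in zip(weights, blocks))
            for name in blocks[0]}
```

Under this scheme a client whose label distribution matches the target's contributes more to the target's customized model, while clients with disjoint distributions contribute little.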
Pages: 123-136
Page count: 13
References
24 entries in total
  • [1] 2021 white paper on artificial intelligence ecology in China's digital economy era, (2022)
  • [2] BILOGREVIC I, JADLIWALA M, KALKAN K, et al., Privacy in mobile computing for location-sharing-based services, International Symposium on Privacy Enhancing Technologies Symposium, pp. 77-96, (2011)
  • [3] LIANG X H, LI X, LUAN T H, et al., Morality-driven data forwarding with privacy preservation in mobile social networks, IEEE Transactions on Vehicular Technology, 61, 7, pp. 3209-3222, (2012)
  • [4] KONECNY J, MCMAHAN H B, RAMAGE D, et al., Federated optimization: distributed machine learning for on-device intelligence, (2016)
  • [5] LE J Q, LEI X Y, MU N K, et al., Federated continuous learning with broad network architecture, IEEE Transactions on Cybernetics, 51, 8, pp. 3874-3888, (2021)
  • [6] SERRA J, SURIS D, MIRON M, et al., Overcoming catastrophic forgetting with hard attention to the task, Proceedings of the International Conference on Machine Learning, pp. 4548-4557, (2018)
  • [7] WU Y, CHEN Y P, WANG L J, et al., Large scale incremental learning, Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 374-382, (2020)
  • [8] HE K M, ZHANG X Y, REN S Q, et al., Deep residual learning for image recognition, Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, (2016)
  • [9] CASTRO F M, MARIN-JIMENEZ M J, GUIL N, et al., End-to-end incremental learning, European Conference on Computer Vision, pp. 241-257, (2018)
  • [10] MASANA M, LIU X L, TWARDOWSKI B, et al., Class-incremental learning: survey and performance evaluation on image classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, 45, 5, pp. 5513-5533, (2023)