Visualization for Federated Learning: Challenges and Framework

Cited by: 0
Authors
Pan R. [1 ,2 ]
Han D. [1 ,2 ]
Pan J. [1 ,2 ]
Zhou S. [1 ]
Wei Y. [1 ]
Mei H. [1 ]
Chen W. [1 ]
Affiliations
[1] State Key Laboratory of CAD&CG, Zhejiang University, Hangzhou
[2] Zhejiang Lab, Hangzhou
Keywords
Anomaly detection; Data privacy; Explainable machine learning; Federated learning;
DOI
10.3724/SP.J.1089.2020.18172
Abstract
Federated learning (FL) is a distributed machine learning solution that guarantees data privacy and security. As with the interpretability problem in traditional machine learning, explaining FL is a new challenge. Based on the distributed and privacy-preserving characteristics of FL methods, this paper explores how to design a visualization framework for FL. Traditional visualization tasks often require a large amount of data, but the privacy constraints of FL mean that clients' data cannot be retrieved. The available data therefore comes mainly from the server-side training process, including server-side model parameters and client training status. After analyzing the challenges of FL interpretability, this paper designs a visualization framework that takes the clients, the server side, and FL models into account. The framework consists of a classical FL model, a data storage center, a data processing module, and a visual analysis interface. Finally, this article introduces two existing visualization cases and discusses a more general visual analysis method for the future. © 2020, Beijing China Science Journal Publishing Co. Ltd. All rights reserved.
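The abstract notes that, because clients' raw data cannot be retrieved, visualization must rely on server-side artifacts: the global model parameters and each client's training status, collected per round into a data storage center. A minimal sketch of this idea (all names such as `fedavg_round`, `train`, and the simulated client update are hypothetical illustrations, not the paper's implementation) in a FedAvg-style loop:

```python
import random

def fedavg_round(global_params, client_updates):
    """Average client parameter vectors (equal-weight FedAvg)."""
    n = len(client_updates)
    return [sum(u[i] for u in client_updates) / n
            for i in range(len(global_params))]

def train(num_rounds=3, num_clients=4, dim=2, seed=0):
    """Simulated FL loop. The server only ever sees parameters and
    per-client status -- never raw client data -- matching the privacy
    constraint described in the abstract. The `log` list stands in for
    the data storage center feeding the visual analysis interface."""
    rng = random.Random(seed)
    params = [0.0] * dim
    log = []
    for rnd in range(num_rounds):
        updates, status = [], []
        for cid in range(num_clients):
            # Hypothetical client update: local copy of the global
            # parameters plus noise, simulating local training.
            update = [p + rng.uniform(-1.0, 1.0) for p in params]
            updates.append(update)
            status.append({"client": cid, "round": rnd,
                           "loss": rng.uniform(0.1, 1.0)})
        params = fedavg_round(params, updates)
        log.append({"round": rnd,
                    "global_params": list(params),
                    "client_status": status})
    return params, log
```

Each entry in `log` holds exactly the two server-visible data sources the abstract identifies, so a downstream processing module could, for example, plot per-client loss trajectories or the drift of the global parameters across rounds.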
Pages: 513-519
Page count: 6