Federated Unlearning: Guarantee the Right of Clients to Forget

Cited by: 7
Authors
Wu, Leijie [1 ]
Guo, Song [1 ]
Wang, Junxiao [2 ]
Hong, Zicong [1 ]
Zhang, Jie [1 ]
Ding, Yaohong [2 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[2] Hong Kong Polytech Univ, Hong Kong, Peoples R China
Source
IEEE NETWORK | 2022, Vol. 36, No. 5
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Computational modeling; Pipelines; Training data; Stochastic processes; Data models; Context modeling;
DOI
10.1109/MNET.001.2200198
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The Right to be Forgotten gives a data owner the right to revoke their data from an entity storing it. In the context of federated learning (FL), the Right to be Forgotten requires that, in addition to the data itself, any influence of the data on the FL model must disappear, a process we call "federated unlearning." The most straightforward and legitimate way to implement federated unlearning is to remove the revoked data and retrain the FL model from scratch, yet the computational and time overhead of full retraining can be prohibitive. In this article, we take the first step toward a comprehensive investigation of how to establish an unlearning paradigm in federated learning. First, we define the problem of efficient federated unlearning, including its challenges and goals, and we identify three common types of federated unlearning requests: class unlearning, client unlearning, and sample unlearning. Based on those challenges and goals, we propose a general federated unlearning pipeline that covers all three request types. We then revisit how the training data affect the final FL model's performance and, on that basis, empower the proposed framework with reverse stochastic gradient ascent (SGA) and elastic weight consolidation (EWC). Extensive experiments verify the effectiveness of the proposed method in terms of both unlearning efficacy and efficiency. We believe the proposed method will serve as an essential component of future machine unlearning systems.
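To illustrate the core idea the abstract names (reverse SGA combined with an EWC penalty), here is a minimal, hypothetical sketch on a toy linear model. All names, data, and hyperparameters are illustrative assumptions, not the paper's actual algorithm: the unlearning step ascends the loss on the revoked ("forget") data while a diagonal-Fisher EWC term anchors parameters that matter for the retained data to the trained reference weights.

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of 0.5 * mean((Xw - y)^2) with respect to w."""
    return X.T @ (X @ w - y) / len(y)

def unlearn_step(w, w_ref, fisher, X_forget, y_forget, lr=0.1, lam=1.0):
    """One reverse-SGA step with an EWC-style anchor: climb the loss on the
    revoked data while the Fisher-weighted quadratic term pulls parameters
    important to the retained data back toward the reference weights."""
    ascent = mse_grad(w, X_forget, y_forget)   # go *up* the forget-set loss
    anchor = lam * fisher * (w - w_ref)        # EWC pull toward w_ref
    return w + lr * ascent - lr * anchor

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])

# A "trained" reference model (slightly noisy so gradients are nonzero)
# and a diagonal Fisher proxy from per-sample squared gradients.
w_ref = np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=4)
fisher = np.mean((X * (X @ w_ref - y)[:, None]) ** 2, axis=0)

X_f, y_f = X[:8], y[:8]                        # data to be forgotten
forget_loss = lambda w: 0.5 * np.mean((X_f @ w - y_f) ** 2)

w = w_ref.copy()
loss_before = forget_loss(w)
for _ in range(20):
    w = unlearn_step(w, w_ref, fisher, X_f, y_f)
loss_after = forget_loss(w)
```

After the loop, the loss on the forget set has increased (the model's fit to the revoked data is degraded), while the EWC anchor limits drift on the remaining parameters; in the paper's setting this trade-off plays out over FL clients' model updates rather than a single centralized model.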
Pages: 129-135
Page count: 7
Related Papers
50 records total
  • [1] Algorithms that forget: Machine unlearning and the right to erasure
    Juliussen, Bjorn Aslak
    Rui, Jon Petter
    Johansen, Dag
    [J]. COMPUTER LAW & SECURITY REVIEW, 2023, 51
  • [2] FORGET-SVGD: PARTICLE-BASED BAYESIAN FEDERATED UNLEARNING
    Gong, Jinu
    Kang, Joonhyuk
    Simeone, Osvaldo
    Kassab, Rahif
    [J]. 2022 IEEE DATA SCIENCE AND LEARNING WORKSHOP (DSLW), 2022,
  • [3] How to Forget Clients in Federated Online Learning to Rank?
    Wang, Shuyi
    Liu, Bing
    Zuccon, Guido
    [J]. ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT III, 2024, 14610 : 105 - 121
  • [4] A Survey on Federated Unlearning
    Wang, P.-F.
    Wei, Z.-Z.
    Zhou, D.-S.
    Song, W.
    Xiao, Y.-M.
    Sun, G.
    Yu, S.
    Zhang, Q.
    [J]. Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (02): : 398 - 422
  • [5] Federated Unlearning With Momentum Degradation
    Zhao, Yian
    Wang, Pengfei
    Qi, Heng
    Huang, Jianguo
    Wei, Zongzheng
    Zhang, Qiang
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (05): : 8860 - 8870
  • [6] Federated Unlearning for Medical Image Analysis
    Zhong, Yuyao
    [J]. FOURTH SYMPOSIUM ON PATTERN RECOGNITION AND APPLICATIONS, SPRA 2023, 2024, 13162
  • [7] Federated Unlearning and Its Privacy Threats
    Wang, Fei
    Li, Baochun
    Li, Bo
    [J]. IEEE NETWORK, 2024, 38 (02): : 294 - 300
  • [8] Communication Efficient and Provable Federated Unlearning
    Tao, Youming
    Wang, Cheng-Long
    Pan, Miao
    Yu, Dongxiao
    Cheng, Xiuzhen
    Wang, Di
    [J]. PROCEEDINGS OF THE VLDB ENDOWMENT, 2024, 17 (05): : 1119 - 1131
  • [9] Towards Making Systems Forget with Machine Unlearning
    Cao, Yinzhi
    Yang, Junfeng
    [J]. 2015 IEEE SYMPOSIUM ON SECURITY AND PRIVACY SP 2015, 2015, : 463 - 480
  • [10] Forgetting to learn and learning to forget: the call for organizational unlearning
    Mull, Mandolen
    Duffy, Clayton
    Silberman, Dave
    [J]. EUROPEAN JOURNAL OF TRAINING AND DEVELOPMENT, 2023, 47 (5/6) : 586 - 598