Federated Unlearning via Class-Discriminative Pruning

Cited by: 46
Authors:
Wang, Junxiao [1 ]
Song Guo [1 ]
Xin Xie [1 ]
Heng Qi [2 ]
Affiliations:
[1] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[2] Dalian Univ Technol, Dalian, Peoples R China
Funding:
National Natural Science Foundation of China
Keywords:
federated learning; machine unlearning; channel pruning
DOI
10.1145/3485447.3512222
CLC classification:
TP3 [Computing Technology, Computer Technology]
Subject classification code:
0812
Abstract
We explore the problem of selectively forgetting categories from trained CNN classification models in federated learning (FL). Given that the data used for training cannot be accessed globally in FL, our insights probe deep into the internal influence of each channel. Through visualization of the feature maps activated by different channels, we observe that channels contribute differently to different categories in image classification. Inspired by this, we propose a method for scrubbing the model clean of information about particular categories. The method requires neither retraining from scratch nor global access to the training data. Instead, we introduce the concept of Term Frequency Inverse Document Frequency (TF-IDF) to quantify the class discrimination of channels. Channels with high TF-IDF scores are more discriminative for the target categories and thus need to be pruned to unlearn. Channel pruning is followed by a fine-tuning process to recover the performance of the pruned model. Evaluated on the CIFAR-10 dataset, our method accelerates unlearning by 8.9x for a ResNet model and 7.9x for a VGG model with no degradation in accuracy, compared to retraining from scratch. On the CIFAR-100 dataset, the speedups are 9.9x and 8.4x, respectively. We envision this work as a complementary building block for FL compliance with legal and ethical criteria.
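The TF-IDF scoring idea in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: it treats channels as "terms" and classes as "documents", assumes per-class mean channel activations have already been collected, and the names `channel_tfidf`, `channels_to_prune`, and the per-class activity threshold are illustrative choices.

```python
import numpy as np

def channel_tfidf(act, target_class):
    """Score channels by class discrimination for one target class.

    act: (num_classes, num_channels) array of mean absolute
    feature-map activations per class (assumed precomputed).
    Returns a (num_channels,) array of TF-IDF-style scores.
    """
    num_classes, _ = act.shape
    # TF: relative activation of each channel for the target class.
    tf = act[target_class] / (act[target_class].sum() + 1e-12)
    # DF: in how many classes is the channel "active" (above that
    # class's mean activation)? Channels active for many classes
    # are less class-discriminative, so they get a lower IDF.
    active = act > act.mean(axis=1, keepdims=True)
    df = active.sum(axis=0)
    idf = np.log(num_classes / (1.0 + df))
    return tf * idf

def channels_to_prune(scores, p=0.1):
    """Indices of the top-p fraction of channels by score."""
    k = max(1, int(len(scores) * p))
    return np.argsort(scores)[-k:]
```

After pruning the selected channels from the trained model, a short fine-tuning pass on the remaining classes (as the abstract describes) recovers accuracy without exposing the forgotten class's data.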
Pages: 622-632
Page count: 11
Related papers (50 total):
  • [31] Goldfish: An Efficient Federated Unlearning Framework
    Wang, Houzhe
    Zhu, Xiaojie
    Chen, Chi
    Esteves-Verissimo, Paulo
    2024 54TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS, DSN 2024, 2024, : 252 - 264
  • [32] Semi-Supervised Domain Adaptation Using Explicit Class-Wise Matching for Domain-Invariant and Class-Discriminative Feature Learning
    Ngo, Ba Hung
    Park, Jae Hyeon
    Cho, Sung In
    IEEE ACCESS, 2021, 9 : 128467 - 128480
  • [33] Defending against gradient inversion attacks in federated learning via statistical machine unlearning
    Gao, Kun
    Zhu, Tianqing
    Ye, Dayong
    Zhou, Wanlei
    KNOWLEDGE-BASED SYSTEMS, 2024, 299
  • [34] Communication-efficient federated learning via personalized filter pruning
    Min, Qi
    Luo, Fei
    Dong, Wenbo
    Gu, Chunhua
    Ding, Weichao
    INFORMATION SCIENCES, 2024, 678
  • [35] Incentive Mechanism Design for Federated Learning and Unlearning
    Ding, Ningning
    Sun, Zhenyu
    Wei, Ermin
    Berry, Randall
    PROCEEDINGS OF THE 2023 INTERNATIONAL SYMPOSIUM ON THEORY, ALGORITHMIC FOUNDATIONS, AND PROTOCOL DESIGN FOR MOBILE NETWORKS AND MOBILE COMPUTING, MOBIHOC 2023, 2023, : 11 - 20
  • [36] Adaptive Clipping and Distillation Enabled Federated Unlearning
    Xie, Zhiqiang
    Gao, Zhipeng
    Lin, Yijing
    Zhao, Chen
    Yu, Xinlei
    Chai, Ze
    2024 IEEE INTERNATIONAL CONFERENCE ON WEB SERVICES, ICWS 2024, 2024, : 748 - 756
  • [37] Federated Unlearning: Guarantee the Right of Clients to Forget
    Wu, Leijie
    Guo, Song
    Wang, Junxiao
    Hong, Zicong
    Zhang, Jie
    Ding, Yaohong
    IEEE NETWORK, 2022, 36 (05): : 129 - 135
  • [38] Vertical Federated Unlearning on the Logistic Regression Model
    Deng, Zihao
    Han, Zhaoyang
    Ma, Chuan
    Ding, Ming
    Yuan, Long
    Ge, Chunpeng
    Liu, Zhe
    ELECTRONICS, 2023, 12 (14)
  • [39] An Empirical Study of Federated Unlearning: Efficiency and Effectiveness
    Thai-Hung Nguyen
    Hong-Phuc Vu
    Dung Thuy Nguyen
    Tuan Minh Nguyen
    Doan, Khoa D.
    Wong, Kok-Seng
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 222, 2023, 222
  • [40] Efficient federated unlearning under plausible deniability
    Varshney, Ayush K.
    Torra, Vicenc
    MACHINE LEARNING, 2025, 114 (01)