Machine Unlearning by Reversing the Continual Learning

Cited: 2
Authors
Zhang, Yongjing [1 ]
Lu, Zhaobo [2 ]
Zhang, Feng [1 ]
Wang, Hao [1 ]
Li, Shaojing [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Qufu Normal Univ, Sch Comp Sci, Rizhao 276826, Peoples R China
[3] Qingdao Agr Univ, Coll Sci & Informat, Qingdao 266109, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 16
Keywords
machine unlearning; continual learning; elastic weight consolidation; decreasing moment matching;
DOI
10.3390/app13169341
Chinese Library Classification
O6 [Chemistry];
Discipline code
0703 ;
Abstract
Recent legislation, such as the European General Data Protection Regulation (GDPR), requires user data holders to guarantee the individual's right to be forgotten. This means that user data holders must completely delete user data upon request. However, in the field of machine learning, it is not possible to simply remove these data from the back-end database in which the training dataset is stored, because the machine learning model still retains information about these data. Retraining the model on a dataset with these data removed can overcome this problem; however, this incurs expensive computational overheads. To remedy this shortcoming, we propose two effective methods to help model owners or data holders remove private data from a trained model. The first method uses an elastic weight consolidation (EWC) constraint term and a modified loss function to neutralize the data to be removed. The second method approximates the posterior distribution of the model as a Gaussian distribution, and the model after unlearning is computed by decreasing moment matching (DMM) between the posterior distributions of the neural network trained on all data and the network trained on the data to be removed. Finally, we conducted experiments on three standard datasets, using backdoor attacks as the evaluation metric. The results show that both methods are effective in removing backdoor triggers from deep learning models. Specifically, EWC can reduce the success rate of backdoor attacks to 0. DMM can keep the model prediction accuracy above 80% and the success rate of backdoor attacks below 10%.
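The two mechanisms described in the abstract can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: `ewc_penalty` is the standard quadratic EWC constraint that anchors parameters near a reference solution, weighted by a diagonal Fisher estimate, and `dmm_unlearn` subtracts the moments of a Gaussian posterior fitted to the forget-set from the moments of the all-data posterior (the reverse of additive moment matching). All function names, the diagonal-Gaussian assumption, and the use of precisions as Fisher weights are assumptions made for this sketch.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    """Quadratic EWC constraint: 0.5 * lam * sum_i F_i (theta_i - theta*_i)^2.

    Penalizes movement away from the anchor theta_star, with each
    coordinate weighted by its diagonal Fisher information F_i.
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def dmm_unlearn(mu_all, prec_all, mu_forget, prec_forget):
    """Decreasing moment matching for diagonal-Gaussian posteriors.

    If the all-data posterior factorizes into retain-set and forget-set
    factors, its precision is the sum of the two factors' precisions and
    its mean is their precision-weighted average. Subtracting the
    forget-set moments therefore recovers the retain-set posterior:
        prec_r = prec_all - prec_forget
        mu_r   = (prec_all * mu_all - prec_forget * mu_forget) / prec_r
    """
    prec_r = prec_all - prec_forget
    mu_r = (prec_all * mu_all - prec_forget * mu_forget) / prec_r
    return mu_r, prec_r
```

Under the factorization assumption, composing the retain-set and forget-set Gaussians into an all-data posterior and then applying `dmm_unlearn` returns the retain-set moments exactly; in a real network the Gaussian approximation makes this a heuristic rather than an identity.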
Pages: 10
Related papers
50 items in total
  • [31] Effective Machine Learning-based Access Control Administration through Unlearning
    Llamas, Javier Martinez
    Preuveneers, Davy
    Joosen, Wouter
    2023 IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS, EUROS&PW, 2023, : 50 - 57
  • [32] FRAMU: Attention-Based Machine Unlearning Using Federated Reinforcement Learning
    Shaik, Thanveer
    Tao, Xiaohui
    Li, Lin
    Xie, Haoran
    Cai, Taotao
    Zhu, Xiaofeng
    Li, Qing
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (10) : 5153 - 5167
  • [33] Certified unlearning for a trustworthy machine learning-based access control administration
    Llamas, Javier Martinez
    Preuveneers, Davy
    Joosen, Wouter
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2025, 24 (02)
  • [34] Learning, unlearning and relearning
    Baker, Paul A.
    Greif, Robert T.
    PEDIATRIC ANESTHESIA, 2020, 30 (03) : 204 - 206
  • [35] SCU: An Efficient Machine Unlearning Scheme for Deep Learning Enabled Semantic Communications
    Wang, Weiqi
    Tian, Zhiyi
    Zhang, Chenhan
    Yu, Shui
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 547 - 558
  • [36] Design and Validation of Reversing Assistant Based on Extreme Learning Machine
    Di, Huanyu
    Yan, Yipeng
    Zhao, Mingxin
    Kang, Mingxin
    FRONTIERS IN ENERGY RESEARCH, 2022, 10
  • [37] Unlearning as (Japanese) learning
    Nishihira, Tadashi
    Rappleye, Jeremy
    EDUCATIONAL PHILOSOPHY AND THEORY, 2022, 54 (09) : 1332 - 1344
  • [39] Continual Active Learning for Efficient Adaptation of Machine Learning Models to Changing Image Acquisition
    Perkonigg, Matthias
    Hofmanninger, Johannes
    Langs, Georg
    INFORMATION PROCESSING IN MEDICAL IMAGING, IPMI 2021, 2021, 12729 : 649 - 660
  • [40] EdDSA Shield: Fortifying Machine Learning Against Data Poisoning Threats in Continual Learning
    Nageswari, Akula
    Sanjeevulu, Vasundra
    PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON DATA SCIENCE, MACHINE LEARNING AND APPLICATIONS, VOL 1, ICDSMLA 2023, 2025, 1273 : 1018 - 1028