Machine Unlearning by Reversing the Continual Learning

Cited by: 2
Authors
Zhang, Yongjing [1 ]
Lu, Zhaobo [2 ]
Zhang, Feng [1 ]
Wang, Hao [1 ]
Li, Shaojing [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Qufu Normal Univ, Sch Comp Sci, Rizhao 276826, Peoples R China
[3] Qingdao Agr Univ, Coll Sci & Informat, Qingdao 266109, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, No. 16
Keywords
machine unlearning; continual learning; elastic weight consolidation; decreasing moment matching;
DOI
10.3390/app13169341
CLC Number
O6 [Chemistry]
Subject Classification
0703
Abstract
Recent legislation, such as the European General Data Protection Regulation (GDPR), requires user data holders to guarantee each individual's right to be forgotten, meaning that they must completely delete a user's data upon request. In machine learning, however, it is not enough to simply remove these data from the back-end database where the training dataset is stored, because the trained model still retains information about them. Retraining the model on a dataset with these data removed overcomes the problem but incurs an expensive computational overhead. To remedy this shortcoming, we propose two effective methods that help model owners or data holders remove private data from a trained model. The first method uses an elastic weight consolidation (EWC) constraint term and a modified loss function to neutralize the data to be removed. The second method approximates the posterior distribution of the model as a Gaussian and computes the unlearned model by decreasing moment matching (DMM) between the posterior distributions of the networks trained on all data and on the data to be removed. Finally, we conducted experiments on three standard datasets, using backdoor attacks as the evaluation mechanism. The results show that both methods effectively remove backdoor triggers from deep learning models: EWC reduces the success rate of backdoor attacks to 0%, while DMM keeps the model's prediction accuracy above 80% and the attack success rate below 10%.
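The abstract describes both mechanisms only at a high level. The two Python sketches below illustrate one plausible reading of each; every name here (fisher_diagonal, unlearn_loss, dmm_merge, theta_star, lam, alpha) is an illustrative assumption, not the authors' published implementation.

The first sketch follows the EWC idea: raise the loss on the data to be forgotten while an elastic weight consolidation penalty anchors the parameters that matter for the retained data.

import torch
import torch.nn.functional as F

def fisher_diagonal(model, retain_loader, device="cpu"):
    # Diagonal Fisher information estimated on the retained data; it scores
    # how important each parameter is for the knowledge we want to keep.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in retain_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(retain_loader), 1) for n, f in fisher.items()}

def unlearn_loss(model, x_forget, y_forget, fisher, theta_star, lam=100.0):
    # Minimizing this raises the loss on the forget data (the negated term),
    # while the EWC penalty keeps parameters that are important for the
    # remaining data close to their original values theta_star.
    forget_term = -F.cross_entropy(model(x_forget), y_forget)
    ewc_penalty = sum(
        (fisher[n] * (p - theta_star[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )
    return forget_term + lam * ewc_penalty

The second sketch reads DMM as the inverse of incremental moment matching (IMM): if each network's posterior is approximated as a Gaussian, the model trained on all data can be viewed as a weighted average of a retained-data model and a forget-data model, and that average can be undone.

@torch.no_grad()
def dmm_merge(theta_all, theta_forget, alpha=0.3):
    # Mean moment matching assumes
    #     theta_all ~ (1 - alpha) * theta_retain + alpha * theta_forget,
    # so solving for theta_retain subtracts the forget-data component
    # out of the full model's parameters.
    return {
        name: (theta_all[name] - alpha * theta_forget[name]) / (1.0 - alpha)
        for name in theta_all
    }

In use, theta_all would be the state dict of the model trained on all data, theta_forget the state dict of a model fitted to the data being removed, and the result would be loaded back with model.load_state_dict(dmm_merge(theta_all, theta_forget)); the mixing weight alpha would need to be tuned.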
Pages: 10
Related Papers
50 records in total (first 10 listed)
  • [1] CONTINUAL LEARNING AND PRIVATE UNLEARNING
    Liu, Bo
    Liu, Qiang
    Stone, Peter
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199, 2022, 199
  • [2] Unlearning during Learning: An Efficient Federated Machine Unlearning Method
Gu, Hanlin
    Zhu, Gongxi
    Zhang, Jie
    Zhao, Xinyuan
    Han, Yuxing
    Fan, Lixin
    Yang, Qiang
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 4035 - 4043
  • [3] Learning to Unlearn for Robust Machine Unlearning
    Huang, Mark He
    Foo, Lin Geng
    Liu, Jun
    COMPUTER VISION - ECCV 2024, PT LII, 2025, 15110 : 202 - 219
  • [4] Clinical applications of continual learning machine learning
    Lee, Cecilia S.
    Lee, Aaron Y.
LANCET DIGITAL HEALTH, 2020, 2 (06) : E279 - E281
  • [5] Continual Learning for Neural Machine Translation
    Cao, Yue
    Wei, Hao-Ran
    Chen, Boxing
    Wan, Xiaojun
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 3964 - 3974
  • [6] Machine unlearning
    Agarwal, Shubham
    NEW SCIENTIST, 2023, 246 (3463) : 40 - 43
  • [7] FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks
    Zhang, Lefeng
    Zhu, Tianqing
    Zhang, Haibin
    Xiong, Ping
    Zhou, Wanlei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 4732 - 4746
  • [8] Coded Machine Unlearning
    Aldaghri, Nasser
    Mahdavifar, Hessam
    Beirami, Ahmad
    IEEE ACCESS, 2021, 9 : 88137 - 88150
  • [9] Adaptive Machine Unlearning
    Gupta, Varun
    Jung, Christopher
    Neel, Seth
    Roth, Aaron
    Sharifi-Malvajerdi, Saeed
    Waites, Chris
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [10] A Review on Machine Unlearning
    Zhang H.
    Nakamura T.
    Isohara T.
    Sakurai K.
SN COMPUTER SCIENCE, 2023, 4 (4)