Machine Unlearning by Reversing the Continual Learning

Cited by: 2
Authors
Zhang, Yongjing [1 ]
Lu, Zhaobo [2 ]
Zhang, Feng [1 ]
Wang, Hao [1 ]
Li, Shaojing [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Qufu Normal Univ, Sch Comp Sci, Rizhao 276826, Peoples R China
[3] Qingdao Agr Univ, Coll Sci & Informat, Qingdao 266109, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 16
Keywords
machine unlearning; continual learning; elastic weight consolidation; decreasing moment matching;
DOI
10.3390/app13169341
CLC Number
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Recent legislation, such as the European General Data Protection Regulation (GDPR), requires holders of user data to guarantee the individual's right to be forgotten: upon request, a user's data must be completely deleted. In machine learning, however, it is not enough to simply remove the data from the back-end database in which the training dataset is stored, because the trained model still retains information about those data. Retraining the model on a dataset with the data removed overcomes this problem, but incurs an expensive computational overhead. To remedy this shortcoming, we propose two effective methods that help model owners or data holders remove private data from a trained model. The first method uses an elastic weight consolidation (EWC) constraint term and a modified loss function to neutralize the data to be removed. The second method approximates the posterior distribution of the model as a Gaussian distribution; the unlearned model is then computed by decreasing moment matching (DMM) between the posterior of the neural network trained on all data and that of the network trained on the data to be removed. Finally, we conducted experiments on three standard datasets, using backdoor attacks as the evaluation metric. The results show that both methods are effective at removing backdoor triggers from deep learning models. Specifically, EWC can reduce the success rate of backdoor attacks to 0, while DMM keeps the model's prediction accuracy above 80% and the success rate of backdoor attacks below 10%.
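The EWC-based unlearning idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the exact loss shape, the choice of a uniform-distribution target to "neutralize" the forget data, and all names here (`ewc_unlearning_loss`, `anchor_params`, `fisher`, `lam`) are assumptions made for illustration, combining a standard EWC quadratic penalty with a forget-set term.

```python
import torch
import torch.nn.functional as F


def ewc_unlearning_loss(model, anchor_params, fisher, forget_x, lam=100.0):
    """Sketch of an EWC-style unlearning objective (illustrative, not the
    paper's exact formulation). Two terms:
      1. a forget term that pushes predictions on the data to be removed
         toward the uniform distribution, neutralizing their influence;
      2. an EWC penalty that keeps parameters close to the original model
         (anchor_params), weighted by Fisher information, so that knowledge
         learned from the retained data is preserved.
    anchor_params and fisher are dicts keyed by parameter name."""
    logits = model(forget_x)
    # Neutralize the forget set: drive its predictions toward uniform.
    uniform = torch.full_like(logits, 1.0 / logits.size(-1))
    forget_term = F.kl_div(
        F.log_softmax(logits, dim=-1), uniform, reduction="batchmean"
    )
    # EWC constraint: penalize movement of parameters that the Fisher
    # information marks as important for the original task.
    ewc_term = 0.0
    for name, p in model.named_parameters():
        ewc_term = ewc_term + (fisher[name] * (p - anchor_params[name]) ** 2).sum()
    return forget_term + (lam / 2.0) * ewc_term
```

Minimizing this loss by gradient descent moves the model away from the forget data while the quadratic penalty anchors it near the original weights; `lam` trades off forgetting strength against retained accuracy.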
Pages: 10