Fast Yet Effective Machine Unlearning

Cited by: 29
Authors
Tarun, Ayush K. [1 ]
Chundawat, Vikram S. [1 ]
Mandal, Murari [2 ,3 ]
Kankanhalli, Mohan [4 ]
Affiliations
[1] Mavvex Labs, Faridabad 121001, India
[2] Natl Univ Singapore, Sch Comp, Singapore 117417, Singapore
[3] Kalinga Inst Ind Technol KIIT, Sch Comp Engn, Bhubaneswar 751024, India
[4] Natl Univ Singapore, Sch Comp, Singapore 117417, Singapore
Funding
National Research Foundation, Singapore;
Keywords
Data models; Training; Data privacy; Deep learning; Task analysis; Privacy; Training data; forgetting; machine unlearning; privacy in artificial intelligence (AI);
DOI
10.1109/TNNLS.2023.3266233
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unlearning the data observed during the training of a machine learning (ML) model is an important task that can play a pivotal role in fortifying the privacy and security of ML-based applications. This article raises the following questions: 1) can we unlearn a single class or multiple classes of data from an ML model without looking at the full training data even once? and 2) can we make the process of unlearning fast and scalable to large datasets, and generalize it to different deep networks? We introduce a novel machine unlearning framework with error-maximizing noise generation and impair-repair based weight manipulation that offers an efficient solution to the above questions. An error-maximizing noise matrix is learned for the class to be unlearned using the original model. The noise matrix is used to manipulate the model weights to unlearn the targeted class of data. We introduce impair and repair steps for a controlled manipulation of the network weights. In the impair step, the noise matrix along with a very high learning rate is used to induce sharp unlearning in the model. Thereafter, the repair step is used to regain the overall performance. With very few update steps, we show excellent unlearning while substantially retaining the overall model accuracy. Unlearning multiple classes requires a similar number of update steps as unlearning a single class, making our approach scalable to large problems. Our method is quite efficient in comparison to the existing methods, works for multiclass unlearning, does not put any constraints on the original optimization mechanism or network design, and works well in both small- and large-scale vision tasks. This work is an important step toward fast and easy implementation of unlearning in deep networks. Source code: https://github.com/vikram2000b/Fast-Machine-Unlearning.
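The abstract outlines a three-stage pipeline: learn an error-maximizing noise matrix for the target class, run a short high-learning-rate impair pass, then a repair pass on retained data. The following is a minimal PyTorch sketch of that pipeline, assuming a trained classifier `model`, a target class index, and a DataLoader `retain_loader` over the retained classes; all function names and hyperparameter values here are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Minimal, illustrative sketch of error-maximizing noise + impair/repair
# unlearning as summarized in the abstract. Hyperparameters and function
# names are assumptions, not the paper's exact code.
import torch
import torch.nn.functional as F

def learn_noise(model, forget_class, batch_shape, steps=200, lr=0.1,
                reg=0.1, device="cpu"):
    """Optimize a noise batch N to *maximize* the frozen model's loss on
    (N, forget_class); an L2 penalty keeps the noise bounded. Only the
    noise is updated by the optimizer; model weights stay fixed."""
    model.eval()
    noise = torch.randn(batch_shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=lr)
    labels = torch.full((batch_shape[0],), forget_class,
                        dtype=torch.long, device=device)
    for _ in range(steps):
        opt.zero_grad()
        # Gradient ascent on the loss: negate cross-entropy, add norm penalty.
        loss = -F.cross_entropy(model(noise), labels) + reg * noise.norm()
        loss.backward()
        opt.step()
    return noise.detach()

def impair(model, noise, forget_class, lr=0.02, device="cpu"):
    """One high-learning-rate update on the noise, labeled as the forget
    class, to sharply degrade the model's knowledge of that class."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    labels = torch.full((noise.shape[0],), forget_class,
                        dtype=torch.long, device=device)
    opt.zero_grad()
    F.cross_entropy(model(noise), labels).backward()
    opt.step()

def repair(model, retain_loader, lr=1e-3, epochs=1, device="cpu"):
    """Brief fine-tuning on retained data to recover overall accuracy."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in retain_loader:
            opt.zero_grad()
            F.cross_entropy(model(x.to(device)), y.to(device)).backward()
            opt.step()

# Usage (illustrative): forget class 0 of a CIFAR-10 classifier.
# noise = learn_noise(model, 0, (64, 3, 32, 32))
# impair(model, noise, 0)
# repair(model, retain_loader)
```

In this sketch the impair pass deliberately associates the learned noise with the forget class, collapsing that class's decision region onto noise rather than real data; the repair pass then restores performance on the remaining classes with a few ordinary update steps.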
Pages: 13046 - 13055
Number of pages: 10
Related Papers
50 items in total
  • [31] Machine Unlearning Method Based On Projection Residual
    Cao, Zihao
    Wang, Jianzong
    Si, Shijing
    Huang, Zhangcheng
    Xiao, Jing
    2022 IEEE 9TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2022, : 270 - 277
  • [32] On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning
    Thudi, Anvith
    Jia, Hengrui
    Shumailov, Ilia
    Papernot, Nicolas
    PROCEEDINGS OF THE 31ST USENIX SECURITY SYMPOSIUM, 2022, : 4007 - 4022
  • [33] PS+: A Simple yet Effective Framework for Fast Training on Parameter Server
    Jin, A-Long
    Xu, Wenchao
    Guo, Song
    Hu, Bing
    Yeung, Kwan
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (12) : 4625 - 4637
  • [34] Simple, Yet Fast and Effective Two-Phase Method for Nurse Rostering
    Guessoum, F.
    Haddadi, S.
    Gattal, E.
    American Journal of Mathematical and Management Sciences, 2020, 39 (01) : 1 - 19
  • [35] Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning
    Hu, Hongsheng
    Wang, Shuo
    Dong, Tian
    Xue, Minhui
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 3257 - 3275
  • [36] Rethinking machine unlearning for large language models
    Liu, Sijia
    Yao, Yuanshun
    Jia, Jinghan
    Casper, Stephen
    Baracaldo, Nathalie
    Hase, Peter
    Yao, Yuguang
    Liu, Chris Yuhao
    Xu, Xiaojun
    Li, Hang
    Varshney, Kush R.
    Bansal, Mohit
    Koyejo, Sanmi
    Liu, Yang
    NATURE MACHINE INTELLIGENCE, 2025, 7 (02) : 181 - 194
  • [37] Privacy preserving machine unlearning for smart cities
    Chen, Kongyang
    Huang, Yao
    Wang, Yiwen
    Zhang, Xiaoxue
    Mi, Bing
    Wang, Yu
    ANNALS OF TELECOMMUNICATIONS, 2024, 79 (1-2) : 61 - 72
  • [38] Machine Unlearning in Gradient Boosting Decision Trees
    Lin, Huawei
    Chung, Jun Woo
    Lao, Yingjie
    Zhao, Weijie
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 1374 - 1383
  • [39] Towards Making Systems Forget with Machine Unlearning
    Cao, Yinzhi
    Yang, Junfeng
    2015 IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2015, 2015, : 463 - 480
  • [40] MUter: Machine Unlearning on Adversarially Trained Models
    Liu, Junxu
    Xue, Mingsheng
    Lou, Jian
    Zhang, Xiaoyu
    Xiong, Li
    Qin, Zhan
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 4869 - 4879