Fast Yet Effective Machine Unlearning

Cited by: 29
Authors
Tarun, Ayush K. [1 ]
Chundawat, Vikram S. [1 ]
Mandal, Murari [2 ,3 ]
Kankanhalli, Mohan [4 ]
Affiliations
[1] Mavvex Labs, Faridabad 121001, India
[2] Natl Univ Singapore, Sch Comp, Singapore 117417, Singapore
[3] Kalinga Inst Ind Technol KIIT, Sch Comp Engn, Bhubaneswar 751024, India
[4] Natl Univ Singapore NUS, Sch Comp, Singapore 117417, Singapore
Funding
National Research Foundation, Singapore
Keywords
Data models; Training; Data privacy; Deep learning; Task analysis; Privacy; Training data; forgetting; machine unlearning; privacy in artificial intelligence (AI);
DOI
10.1109/TNNLS.2023.3266233
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Unlearning the data observed during the training of a machine learning (ML) model is an important task that can play a pivotal role in fortifying the privacy and security of ML-based applications. This article raises the following questions: 1) can we unlearn a single or multiple class(es) of data from an ML model without looking at the full training data even once? and 2) can we make the process of unlearning fast and scalable to large datasets, and generalize it to different deep networks? We introduce a novel machine unlearning framework with error-maximizing noise generation and impair-repair based weight manipulation that offers an efficient solution to the above questions. An error-maximizing noise matrix is learned for the class to be unlearned using the original model. The noise matrix is used to manipulate the model weights to unlearn the targeted class of data. We introduce impair and repair steps for a controlled manipulation of the network weights. In the impair step, the noise matrix along with a very high learning rate is used to induce sharp unlearning in the model. Thereafter, the repair step is used to regain the overall performance. With very few update steps, we show excellent unlearning while substantially retaining the overall model accuracy. Unlearning multiple classes requires a similar number of update steps as for a single class, making our approach scalable to large problems. Our method is quite efficient in comparison to the existing methods, works for multiclass unlearning, does not put any constraints on the original optimization mechanism or network design, and works well in both small and large-scale vision tasks. This work is an important step toward fast and easy implementation of unlearning in deep networks. Source code: https://github.com/vikram2000b/Fast-Machine-Unlearning.
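The unlearning recipe in the abstract (learn error-maximizing noise for the forget class, impair with a high learning rate, then repair on the retained data) can be sketched end to end on a toy softmax classifier. The snippet below is a minimal illustration, not the authors' implementation: the 2-D data, the linear model, and all learning rates and step counts are invented for the demo, and the noise optimization here uses normalized gradient ascent rather than the regularized optimizer used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-class data: well-separated Gaussian blobs in 2-D.
def make_data(n=100):
    centers = np.array([[0.0, 4.0], [4.0, 0.0], [-4.0, -4.0]])
    X = np.vstack([rng.normal(c, 0.5, size=(n, 2)) for c in centers])
    y = np.repeat(np.arange(3), n)
    return X, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def accuracy(W, b, X, y):
    return float((np.argmax(X @ W + b, axis=1) == y).mean())

def grad_step(W, b, X, y, lr):
    # One cross-entropy gradient-descent step on a linear softmax model.
    p = softmax(X @ W + b)
    p[np.arange(len(y)), y] -= 1.0          # dLoss/dlogits = p - onehot(y)
    W -= lr * X.T @ p / len(y)
    b -= lr * p.mean(axis=0)
    return W, b

X, y = make_data()
W, b = np.zeros((2, 3)), np.zeros(3)
for _ in range(200):                        # train the "original" model
    W, b = grad_step(W, b, X, y, lr=0.5)

forget = 0                                  # class to unlearn
retain = y != forget
acc_f_before = accuracy(W, b, X[~retain], y[~retain])

# 1) Learn an error-maximizing noise matrix for the forget class:
#    gradient *ascent* on the inputs w.r.t. the forget-class loss,
#    with normalized steps so the noise stays at a moderate scale.
N = rng.normal(0.0, 1.0, size=(50, 2))
yf = np.full(50, forget)
for _ in range(15):
    p = softmax(N @ W + b)
    p[np.arange(50), yf] -= 1.0
    G = p @ W.T                             # dLoss/dN, one row per noise sample
    N += 0.4 * G / (np.linalg.norm(G, axis=1, keepdims=True) + 1e-9)

# 2) Impair: a few updates on (noise, forget label) with a high learning rate
#    drag the forget-class boundary away from its real data.
for _ in range(3):
    W, b = grad_step(W, b, N, yf, lr=2.0)

# 3) Repair: ordinary updates on the retained classes restore their accuracy.
Xr, yr = X[retain], y[retain]
for _ in range(200):
    W, b = grad_step(W, b, Xr, yr, lr=0.5)

acc_f_after = accuracy(W, b, X[~retain], y[~retain])
acc_r_after = accuracy(W, b, X[retain], y[retain])
print(acc_f_before, acc_f_after, acc_r_after)
```

With the fixed seed, the forget-class accuracy collapses after the impair step while the repair step recovers the retained classes, mirroring the behavior the abstract describes; the deliberately high impair learning rate (4x the training rate here) is what makes the unlearning sharp in only a few updates.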
Pages: 13046-13055
Page count: 10
Related Papers (50 total)
  • [1] Fast Model Debias with Machine Unlearning
    Chen, Ruizhe
    Yang, Jianfei
    Xiong, Huimin
    Bai, Jianhong
    Hu, Tianxiang
    Hao, Jin
    Feng, Yang
    Zhou, Joey Tianyi
    Wu, Jian
    Liu, Zuozhu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [2] Fast Federated Machine Unlearning with Nonlinear Functional Theory
    Che, Tianshi
    Zhou, Yang
    Zhang, Zijie
    Lyu, Lingjuan
    Liu, Ji
    Yan, Da
    Dou, Dejing
    Huan, Jun
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023
  • [3] Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer Level Attack and Knowledge Distillation
    Kim, Hyunjune
    Lee, Sangyong
    Woo, Simon S.
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 19, 2024, : 21241 - 21248
  • [4] Machine unlearning
    Agarwal, Shubham
    NEW SCIENTIST, 2023, 246 (3463) : 40 - 43
  • [5] Fast Model Update for IoT Traffic Anomaly Detection With Machine Unlearning
    Fan, Jiamin
    Wu, Kui
    Zhou, Yang
    Zhao, Zhengan
    Huang, Shengqiang
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (10) : 8590 - 8602
  • [6] Fast Machine Unlearning without Retraining through Selective Synaptic Dampening
    Foster, Jack
    Schoepf, Stefan
    Brintrup, Alexandra
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 11, 2024, : 12043 - 12051
  • [7] Coded Machine Unlearning
    Aldaghri, Nasser
    Mahdavifar, Hessam
    Beirami, Ahmad
    IEEE ACCESS, 2021, 9 : 88137 - 88150
  • [8] Adaptive Machine Unlearning
    Gupta, Varun
    Jung, Christopher
    Neel, Seth
    Roth, Aaron
    Sharifi-Malvajerdi, Saeed
    Waites, Chris
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [9] A Review on Machine Unlearning
    Zhang H.
    Nakamura T.
    Isohara T.
    Sakurai K.
    SN COMPUTER SCIENCE, 2023, 4 (4)
  • [10] Machine Unlearning: A Survey
    Xu, Heng
    Zhu, Tianqing
    Zhang, Lefeng
    Zhou, Wanlei
    Yu, Philip S.
    ACM COMPUTING SURVEYS, 2024, 56 (01)