Fast Yet Effective Machine Unlearning

Cited by: 29
Authors
Tarun, Ayush K. [1]
Chundawat, Vikram S. [1]
Mandal, Murari [2,3]
Kankanhalli, Mohan [4]
Affiliations
[1] Mavvex Labs, Faridabad 121001, India
[2] Natl Univ Singapore, Sch Comp, Singapore 117417, Singapore
[3] Kalinga Inst Ind Technol KIIT, Sch Comp Engn, Bhubaneswar 751024, India
[4] Natl Univ Singapore NUS, Sch Comp, Singapore 117417, Singapore
Funding
National Research Foundation of Singapore
Keywords
Data models; Training; Data privacy; Deep learning; Task analysis; Privacy; Training data; forgetting; machine unlearning; privacy in artificial intelligence (AI)
DOI
10.1109/TNNLS.2023.3266233
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Unlearning the data observed during the training of a machine learning (ML) model is an important task that can play a pivotal role in fortifying the privacy and security of ML-based applications. This article raises the following questions: 1) can we unlearn a single class or multiple classes of data from an ML model without looking at the full training data even once? and 2) can we make the process of unlearning fast and scalable to large datasets, and generalize it to different deep networks? We introduce a novel machine unlearning framework with error-maximizing noise generation and impair-repair based weight manipulation that offers an efficient solution to the above questions. An error-maximizing noise matrix is learned for the class to be unlearned using the original model. The noise matrix is used to manipulate the model weights to unlearn the targeted class of data. We introduce impair and repair steps for a controlled manipulation of the network weights. In the impair step, the noise matrix along with a very high learning rate is used to induce sharp unlearning in the model. Thereafter, the repair step is used to regain the overall performance. With very few update steps, we show excellent unlearning while substantially retaining the overall model accuracy. Unlearning multiple classes requires a similar number of update steps as for a single class, making our approach scalable to large problems. Our method is highly efficient compared with existing methods, works for multiclass unlearning, places no constraints on the original optimization mechanism or network design, and works well on both small-scale and large-scale vision tasks. This work is an important step toward fast and easy implementation of unlearning in deep networks. Source code: https://github.com/vikram2000b/Fast-Machine-Unlearning.
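The impair-repair pipeline described in the abstract can be sketched end to end in PyTorch. This is a minimal illustration on synthetic data, not the authors' implementation (see the linked repository for that): the toy MLP, the clustered data, the class indices, and all step counts and learning rates are assumptions chosen only to make the three stages visible.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Illustrative stand-ins: a tiny classifier and well-separated synthetic
# clusters, one per class, so the whole pipeline runs in seconds.
NUM_CLASSES, FORGET_CLASS, DIM = 4, 2, 16
means = 3.0 * torch.randn(NUM_CLASSES, DIM)
y = torch.randint(0, NUM_CLASSES, (512,))
X = means[y] + 0.5 * torch.randn(512, DIM)

model = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, NUM_CLASSES))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):  # ordinary training of the "original" model
    opt.zero_grad()
    F.cross_entropy(model(X), y).backward()
    opt.step()

# Step 1: learn an error-maximizing noise matrix for the class to forget.
# The original model is frozen; the noise is optimized so that labeling it
# as FORGET_CLASS yields the largest possible loss.
for p in model.parameters():
    p.requires_grad_(False)
noise = torch.randn(64, DIM, requires_grad=True)
target = torch.full((64,), FORGET_CLASS)
noise_opt = torch.optim.Adam([noise], lr=0.1)
for _ in range(100):
    noise_opt.zero_grad()
    (-F.cross_entropy(model(noise), target)).backward()  # maximize the loss
    noise_opt.step()
for p in model.parameters():
    p.requires_grad_(True)

# Step 2 (impair): a handful of updates on the noise, labeled with the
# forget class, at a deliberately high learning rate for sharp unlearning.
impair_opt = torch.optim.SGD(model.parameters(), lr=0.2)
for _ in range(5):
    impair_opt.zero_grad()
    F.cross_entropy(model(noise.detach()), target).backward()
    impair_opt.step()

# Step 3 (repair): brief fine-tuning on the retained classes only, to
# regain overall accuracy without re-exposing the forgotten class.
retain = y != FORGET_CLASS
repair_opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    repair_opt.zero_grad()
    F.cross_entropy(model(X[retain]), y[retain]).backward()
    repair_opt.step()

preds = model(X).argmax(dim=1)
forget_acc = (preds[y == FORGET_CLASS] == FORGET_CLASS).float().mean().item()
retain_acc = (preds[retain] == y[retain]).float().mean().item()
print(f"forget-class acc: {forget_acc:.2f}, retain acc: {retain_acc:.2f}")
```

Note how the abstract's scalability claim maps onto this sketch: unlearning several classes would reuse the same impair-repair loop with one noise batch per forgotten class, so the number of weight updates stays roughly constant rather than growing with dataset size.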
Pages: 13046 - 13055
Page count: 10
Related Papers
50 records in total
  • [41] Supporting Trustworthy AI Through Machine Unlearning
    Hine, Emmie
    Novelli, Claudio
    Taddeo, Mariarosaria
    Floridi, Luciano
    SCIENCE AND ENGINEERING ETHICS, 2024, 30 (05)
  • [42] Machine Unlearning: Challenges in Data Quality and Access
    Xu, Miao
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 8589 - 8594
  • [43] Fast and effective worm fingerprinting via machine learning
    Yang, Stewart
    Song, Jianping
    Rajamanij, Harish
    Cho, Taewon
    Zhang, Yin
    Mooney, Raymond
    3rd International Conference on Autonomic Computing, Proceedings, 2005, : 311 - 313
  • [44] Model Sparsity Can Simplify Machine Unlearning
    Jia, Jinghan
    Liu, Jiancheng
    Ram, Parikshit
    Yao, Yuguang
    Liu, Gaowen
    Liu, Yang
    Sharma, Pranay
    Liu, Sijia
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [45] Privacy preserving machine unlearning for smart cities
    Chen, Kongyang
    Huang, Yao
    Wang, Yiwen
    Zhang, Xiaoxue
    Mi, Bing
    Wang, Yu
    ANNALS OF TELECOMMUNICATIONS, 2024, 79 : 61 - 72
  • [46] Algorithms that forget: Machine unlearning and the right to erasure
    Juliussen, Bjorn Aslak
    Rui, Jon Petter
    Johansen, Dag
    COMPUTER LAW & SECURITY REVIEW, 2023, 51
  • [47] Evaluating Machine Unlearning: Applications, Approaches, and Accuracy
    Ali, Zulfiqar
    Muhammad, Asif
    Adnan, Rubina
    Alkhalifah, Tamim
    Aslam, Sheraz
    ENGINEERING REPORTS, 2025, 7 (01)
  • [48] Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten
    Nguyen, Quoc Phong
    Oikawa, Ryutaro
    Divakaran, Dinil Mon
    Chan, Mun Choon
    Low, Bryan Kian Hsiang
    ASIA CCS'22: PROCEEDINGS OF THE 2022 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2022, : 351 - 363
  • [49] Efficient Vertical Federated Unlearning via Fast Retraining
    Wang, Zichen
    Gao, Xiangshan
    Wang, Cong
    Cheng, Peng
    Chen, Jiming
    ACM TRANSACTIONS ON INTERNET TECHNOLOGY, 2024, 24 (02) : 1 - 22
  • [50] Closed-form Machine Unlearning for Matrix Factorization
    Zhang, Shuijing
    Lou, Jian
    Xiong, Li
    Zhang, Xiaoyu
    Liu, Jing
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 3278 - 3287