Targeted Data Poisoning Attacks Against Continual Learning Neural Networks

Cited by: 2
Authors
Li, Huayu [1 ]
Ditzler, Gregory [1 ]
Affiliation
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
Funding
U.S. National Science Foundation
Keywords
continual learning; adversarial machine learning; data poisoning attack;
DOI
10.1109/IJCNN55064.2022.9892774
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Continual (incremental) learning approaches are designed to address catastrophic forgetting in neural networks by training on batches or streaming data over time. In many real-world scenarios, the environments that generate streaming data draw from untrusted sources, and these sources can supply data poisoned by an adversary who manipulates or injects malicious samples into the training data. Untrusted data sources and malicious samples therefore expose vulnerabilities of neural networks that can lead to serious consequences in applications requiring reliable performance. However, recent work on continual learning has focused only on adversary-agnostic scenarios, without considering the possibility of data poisoning attacks. Further, recent work has demonstrated that continual learning approaches are vulnerable to backdoor attacks under a relaxed constraint on manipulating data. In this paper, we focus on a more general and practical poisoning setting that artificially forces catastrophic forgetting through clean-label data poisoning attacks. We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain stealthy. The approach is benchmarked against three state-of-the-art continual learning algorithms in both domain- and task-incremental learning scenarios. The experiments demonstrate that accuracy on the targeted tasks drops significantly when the poisoned dataset is used in continual task learning.
Pages: 8
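The abstract describes the attack only at a high level. As a rough illustration of the clean-label, feature-collision style of poisoning that this line of work builds on (in the spirit of Shafahi et al.'s "Poison Frogs", NeurIPS 2018), the PyTorch sketch below perturbs a current-task image so that its features collide with those of a sample from the task the attacker wants forgotten, while a stealthiness penalty keeps the poison visually close to its correctly labeled base image. The function name craft_poison, the feature_extractor split, and all hyperparameters are illustrative assumptions, not the authors' published algorithm.

# Illustrative sketch only: a generic feature-collision clean-label poison
# crafting loop, adapted as a guess at how a previously learned task might be
# targeted in a continual learning setting. Names and hyperparameters are
# assumptions, not the paper's method.
import torch
import torch.nn as nn

def craft_poison(feature_extractor: nn.Module,
                 base_img: torch.Tensor,    # clean current-task image, keeps its true label
                 target_img: torch.Tensor,  # sample from the earlier task to be forgotten
                 steps: int = 200,
                 lr: float = 0.01,
                 beta: float = 0.1) -> torch.Tensor:
    """Perturb base_img so its features collide with target_img's features,
    while an L2 penalty keeps the poison visually close to base_img."""
    feature_extractor.eval()
    with torch.no_grad():
        target_feat = feature_extractor(target_img)

    poison = base_img.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat_loss = (feature_extractor(poison) - target_feat).pow(2).sum()
        vis_loss = beta * (poison - base_img).pow(2).sum()  # stealthiness term
        (feat_loss + vis_loss).backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)  # keep a valid image
    return poison.detach()

In a continual learning pipeline, poisons crafted this way would be mixed into the training data of a later task; the paper's experiments measure the resulting accuracy drop on the targeted earlier task.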