Targeted Data Poisoning Attacks Against Continual Learning Neural Networks

Cited by: 2
Authors
Li, Huayu [1]
Ditzler, Gregory [1]
Affiliations
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
Funding
U.S. National Science Foundation
Keywords
continual learning; adversarial machine learning; data poisoning attack
DOI
10.1109/IJCNN55064.2022.9892774
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Continual (incremental) learning approaches are designed to address catastrophic forgetting in neural networks that are trained on batches or streaming data over time. In many real-world scenarios, the environments that generate streaming data include untrusted sources, which an adversary can exploit to manipulate and inject malicious samples into the training data. Such poisoned data expose vulnerabilities of neural networks that can lead to serious consequences in applications requiring reliable performance. However, recent work on continual learning has focused only on adversary-agnostic scenarios and has not considered the possibility of data poisoning attacks. Further, recent work has demonstrated that continual learning approaches are vulnerable to backdoor attacks under a relaxed constraint on data manipulation. In this paper, we focus on a more general and practical poisoning setting that artificially forces catastrophic forgetting through clean-label data poisoning attacks. We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain stealthy. The approach is benchmarked against three state-of-the-art continual learning algorithms in both domain- and task-incremental learning scenarios. The experiments demonstrate that the accuracy on targeted tasks drops significantly when the poisoned dataset is used in continual task learning.
Pages: 8
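The abstract above describes a clean-label, task-targeted poisoning attack but gives no construction details. As a minimal sketch only, not the method from this paper, the PyTorch snippet below shows one plausible realization that borrows the gradient-alignment idea from the gradient-matching poisoning literature: perturbations kept inside a small L-infinity ball (labels unchanged, so the poison stays clean-label and stealthy) are optimized so that an ordinary training step on the poisoned batch approximates an ascent step on the loss of a targeted earlier task. The function name, the alignment objective, and all hyperparameters are illustrative assumptions, not the authors' construction.

```python
# Hypothetical sketch of a clean-label, task-targeted poisoning step.
# Assumes `model` is a standard nn.Module classifier with inputs in [0, 1].
import torch
import torch.nn.functional as F

def craft_task_targeted_poison(model, x_cur, y_cur, x_tgt, y_tgt,
                               eps=8 / 255, steps=40, lr=0.01):
    """Return x_cur + delta with ||delta||_inf <= eps; labels y_cur are kept,
    so the poisoned samples remain correctly labeled (clean-label)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient direction that would INCREASE loss on the targeted earlier task.
    tgt_loss = F.cross_entropy(model(x_tgt), y_tgt)
    tgt_grad = [g.detach() for g in torch.autograd.grad(tgt_loss, params)]

    delta = torch.zeros_like(x_cur, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        poison_loss = F.cross_entropy(model(x_cur + delta), y_cur)
        poison_grad = torch.autograd.grad(poison_loss, params,
                                          create_graph=True)
        # SGD steps along -poison_grad, so aligning poison_grad with
        # -tgt_grad makes a normal training step on the poisoned batch an
        # approximate ascent step on the targeted task's loss.
        dot = sum((pg * -tg).sum() for pg, tg in zip(poison_grad, tgt_grad))
        norm = (torch.sqrt(sum((pg ** 2).sum() for pg in poison_grad)) *
                torch.sqrt(sum((tg ** 2).sum() for tg in tgt_grad)) + 1e-12)
        align_loss = 1.0 - dot / norm  # 1 - cosine similarity

        opt.zero_grad()
        align_loss.backward()
        opt.step()

        with torch.no_grad():
            delta.clamp_(-eps, eps)                                # stealth budget
            delta.copy_(torch.clamp(x_cur + delta, 0, 1) - x_cur)  # valid pixels

    model.zero_grad(set_to_none=True)  # discard grads accumulated while crafting
    return (x_cur + delta).detach()
```

Under these assumptions, mixing the returned batch into the current task's training data would be expected to degrade accuracy on the targeted earlier task more than on other tasks, which is the forgetting behavior the abstract reports.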
Related papers
50 records in total
  • [41] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375
  • [42] Securing federated learning: a defense strategy against targeted data poisoning attack
    Khraisat, Ansam
    Alazab, Ammar
    Alazab, Moutaz
    Jan, Tony
    Singh, Sarabjot
    Uddin, Md. Ashraf
    DISCOVER INTERNET OF THINGS, 5 (1):
  • [43] Targeted Clean-Label Poisoning Attacks on Federated Learning
    Patel, Ayushi
    Singh, Priyanka
    RECENT TRENDS IN IMAGE PROCESSING AND PATTERN RECOGNITION, RTIP2R 2022, 2023, 1704 : 231 - 243
  • [44] A Defense Method against Poisoning Attacks on IoT Machine Learning Using Poisonous Data
    Chiba, Tomoki
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    2020 IEEE THIRD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE 2020), 2020, : 100 - 107
  • [45] A Countermeasure Method Using Poisonous Data Against Poisoning Attacks on IoT Machine Learning
    Chiba, Tomoki
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    INTERNATIONAL JOURNAL OF SEMANTIC COMPUTING, 2021, 15 (02) : 215 - 240
  • [46] Defending Against Poisoning Attacks in Federated Learning with Blockchain
    Dong, N.
    Wang, Z.
    Sun, J.
    Kampffmeyer, M.
    Knottenbelt, W.
    Xing, E.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (07) : 1 - 13
  • [47] A Novel Continual Learning Approach for Competitive Neural Networks
    Palomo, Esteban J.
    Miguel Ortiz-de-Lazcano-Lobato, Juan
    David Fernandez-Rodriguez, Jose
    Lopez-Rubio, Ezequiel
    Maria Maza-Quiroga, Rosa
    BIO-INSPIRED SYSTEMS AND APPLICATIONS: FROM ROBOTICS TO AMBIENT INTELLIGENCE, PT II, 2022, 13259 : 223 - 232
  • [48] Efficient continual learning in neural networks with embedding regularization
    Pomponi, Jary
    Scardapane, Simone
    Lomonaco, Vincenzo
    Uncini, Aurelio
    NEUROCOMPUTING, 2020, 397 : 139 - 148
  • [49] Data Poisoning Attack against Neural Network-Based On-Device Learning Anomaly Detector by Physical Attacks on Sensors
    Ino, Takahito
    Yoshida, Kota
    Matsutani, Hiroki
    Fujino, Takeshi
    SENSORS, 2024, 24 (19)
  • [50] CONTRA: Defending Against Poisoning Attacks in Federated Learning
    Awan, Sana
    Luo, Bo
    Li, Fengjun
    COMPUTER SECURITY - ESORICS 2021, PT I, 2021, 12972 : 455 - 475