Targeted Data Poisoning Attacks Against Continual Learning Neural Networks

Cited by: 2
Authors:
Li, Huayu [1]
Ditzler, Gregory [1]
Affiliation:
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
Funding:
U.S. National Science Foundation
Keywords:
continual learning; adversarial machine learning; data poisoning attack
DOI:
10.1109/IJCNN55064.2022.9892774
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Continual (incremental) learning approaches are designed to address catastrophic forgetting in neural networks by training on batches or streaming data over time. In many real-world scenarios, the environments that generate streaming data draw on untrusted sources, and these untrusted sources can contain data poisoned by an adversary who manipulates and injects malicious samples into the training data. Such untrusted sources and malicious samples expose vulnerabilities of neural networks that can lead to serious consequences in applications requiring reliable performance. However, recent work on continual learning has focused only on adversary-agnostic scenarios, without considering the possibility of data poisoning attacks. Further, recent work has demonstrated that continual learning approaches are vulnerable to backdoor attacks under relaxed constraints on data manipulation. In this paper, we focus on a more general and practical poisoning setting that artificially forces catastrophic forgetting through clean-label data poisoning attacks. We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain stealthy. The approach is benchmarked against three state-of-the-art continual learning algorithms in both domain- and task-incremental learning scenarios. The experiments demonstrate that accuracy on the targeted tasks drops significantly when the poisoned dataset is used in continual task learning.
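To make the attack setting concrete, the sketch below shows one way a clean-label poisoning objective against a previously learned task could look in PyTorch. It uses a gradient-matching surrogate: poison inputs are perturbed within a small L-inf budget, with their true labels left unchanged (hence clean-label), so that a victim's training step on them pushes the model toward higher loss on the targeted earlier task. This surrogate and every name in it (craft_poisons, x_old, the eps budget) are illustrative assumptions, not the authors' published algorithm.

```python
# Hypothetical sketch (not the paper's released code): clean-label,
# gradient-matching-style poisoning against a previously learned task.
import torch
import torch.nn.functional as F

def craft_poisons(model, x_poison, y_poison, x_old, y_old,
                  eps=8 / 255, steps=100, step_size=0.01):
    """Perturb x_poison within an L-inf ball of radius eps, keeping the
    true labels y_poison, so that training on the poisons mimics
    *raising* the loss on the targeted old task (x_old, y_old)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Adversary's objective: increase old-task loss, i.e. match the
    # gradient of the *negative* old-task loss.
    adv_loss = -F.cross_entropy(model(x_old), y_old)
    adv_grad = [g.detach() for g in torch.autograd.grad(adv_loss, params)]

    delta = torch.zeros_like(x_poison, requires_grad=True)
    for _ in range(steps):
        # Gradient the victim would compute on the poisoned batch.
        poison_loss = F.cross_entropy(model(x_poison + delta), y_poison)
        poison_grad = torch.autograd.grad(poison_loss, params,
                                          create_graph=True)
        # Minimize negative cosine similarity between the two gradients.
        dot = sum((pg * ag).sum() for pg, ag in zip(poison_grad, adv_grad))
        p_norm = torch.sqrt(sum((pg ** 2).sum() for pg in poison_grad))
        a_norm = torch.sqrt(sum((ag ** 2).sum() for ag in adv_grad))
        align_loss = 1 - dot / (p_norm * a_norm + 1e-12)

        grad_delta, = torch.autograd.grad(align_loss, delta)
        with torch.no_grad():
            delta -= step_size * grad_delta.sign()  # signed descent step
            delta.clamp_(-eps, eps)                 # stealth budget
            # Keep poisoned pixels in the valid [0, 1] image range.
            delta.copy_((x_poison + delta).clamp(0, 1) - x_poison)
    return (x_poison + delta).detach()
```

In a domain- or task-incremental run, the returned poisons would simply be mixed into a later task's training batches; because their labels are correct and the perturbation is bounded by eps, they are difficult to flag by inspection.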
Pages: 8
Related Papers
(50 in total; items [21]-[30] shown)
  • [21] Continual learning with attentive recurrent neural networks for temporal data classification
    Yin, Shao-Yu
    Huang, Yu
    Chang, Tien-Yu
    Chang, Shih-Fang
    Tseng, Vincent S.
    NEURAL NETWORKS, 2023, 158: 171-187
  • [22] Class-Targeted Poisoning Attacks against DNNs
    Chen, Jian
    Wu, Jingyao
    Yin, Hao
    Li, Qiang
    Zhang, Wensheng
    Wang, Chen
    2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024: 20-27
  • [23] Stronger Targeted Poisoning Attacks Against Malware Detection
    Narisada, Shintaro
    Sasaki, Shoichiro
    Hidano, Seira
    Uchibayashi, Toshihiro
    Suganuma, Takuo
    Hiji, Masahiro
    Kiyomoto, Shinsaku
    CRYPTOLOGY AND NETWORK SECURITY, CANS 2020, 2020, 12579: 65-84
  • [24] Robust Learning for Data Poisoning Attacks
    Wang, Yunjuan
    Mianjy, Poorya
    Arora, Raman
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139: 7872-7881
  • [25] Data Poisoning Attacks on Crowdsourcing Learning
    Chen, Pengpeng
    Sun, Hailong
    Chen, Zhijun
    WEB AND BIG DATA, APWEB-WAIM 2021, PT I, 2021, 12858: 164-179
  • [26] Data Poisoning Attack Aiming the Vulnerability of Continual Learning
    Han, Gyojin
    Choi, Jaehyun
    Hong, Hyeong Gwon
    Kim, Junmo
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023: 1905-1909
  • [27] Data Poisoning Attacks in Gossip Learning
    Pham, Alexandre
    Potop-Butucaru, Maria
    Tixeuil, Sebastien
    Fdida, Serge
    ADVANCED INFORMATION NETWORKING AND APPLICATIONS, VOL 2, AINA 2024, 2024, 200: 213-224
  • [28] Continual Learning Using Bayesian Neural Networks
    Li, Honglin
    Barnaghi, Payam
    Enshaeifar, Shirin
    Ganz, Frieder
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32(9): 4243-4252
  • [29] Continual robot learning with constructive neural networks
    Grossmann, A
    Poli, R
    LEARNING ROBOTS, PROCEEDINGS, 1998, 1545: 95-108
  • [30] Sparse Progressive Neural Networks for Continual Learning
    Ergun, Esra
    Toreyin, Behcet Ugur
    ADVANCES IN COMPUTATIONAL COLLECTIVE INTELLIGENCE (ICCCI 2021), 2021, 1463: 715-725