Targeted Data Poisoning Attacks Against Continual Learning Neural Networks

Cited: 2
Authors
Li, Huayu [1 ]
Ditzler, Gregory [1 ]
Institution
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
Funding
U.S. National Science Foundation
Keywords
continual learning; adversarial machine learning; data poisoning attack;
DOI
10.1109/IJCNN55064.2022.9892774
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Continual (incremental) learning approaches are designed to address catastrophic forgetting in neural networks by training on batches or streaming data over time. In many real-world scenarios, the environments that generate streaming data draw on untrusted sources, and an adversary can manipulate those sources to inject malicious samples into the training data. Such poisoned data expose vulnerabilities of neural networks that can lead to serious consequences in applications requiring reliable performance. However, recent work on continual learning has focused on adversary-agnostic scenarios and has not considered the possibility of data poisoning attacks. Further, prior work has demonstrated that continual learning approaches are vulnerable to backdoor attacks, albeit under a relaxed constraint on how the data may be manipulated. In this paper, we focus on a more general and practical poisoning setting that artificially forces catastrophic forgetting through clean-label data poisoning attacks. We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain stealthy. The approach is benchmarked against three state-of-the-art continual learning algorithms in both domain- and task-incremental learning scenarios. The experiments demonstrate that accuracy on the targeted tasks drops significantly when the poisoned dataset is used in continual task learning.
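The record does not include code; as a rough illustration of the clean-label crafting idea described in the abstract, the PyTorch sketch below uses a feature-collision objective in the spirit of Shafahi et al.'s "Poison Frogs" attack, not the authors' exact task-targeted method. Every name here (craft_clean_label_poison, feature_extractor, epsilon) is an illustrative assumption: the poison keeps its correct label, its perturbation is bounded in the L-infinity norm for stealth, and its features are pushed toward a sample from the earlier task the attacker wants the continual learner to forget.

# Minimal sketch (PyTorch), assuming a feature-collision style of
# clean-label poisoning; an illustration, not the paper's algorithm.
import torch
import torch.nn.functional as F

def craft_clean_label_poison(feature_extractor, x_base, x_target,
                             epsilon=8 / 255, steps=100, lr=0.01):
    """Perturb x_base (which keeps its true label, hence "clean-label")
    so its features collide with x_target, a sample from the previously
    learned task targeted for forgetting. The perturbation stays inside
    an L-inf ball of radius epsilon so the poison remains stealthy."""
    delta = torch.zeros_like(x_base, requires_grad=True)
    with torch.no_grad():
        target_feat = feature_extractor(x_target)   # frozen target features
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        poison_feat = feature_extractor(x_base + delta)
        loss = F.mse_loss(poison_feat, target_feat)  # feature collision
        loss.backward()
        opt.step()
        with torch.no_grad():                        # stealth constraints
            delta.clamp_(-epsilon, epsilon)
            delta.copy_((x_base + delta).clamp(0, 1) - x_base)
    return (x_base + delta).detach()

Injected into a new task's training stream with their original labels, such poisons look benign to a human labeler, yet training on them drags the shared representation away from the targeted earlier task, which is the artificially forced forgetting the abstract describes.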
Pages: 8
Related Papers
50 records in total
  • [31] Continual Learning with Sparse Progressive Neural Networks
    Ergun, Esra
    Toreyin, Behcet Ugur
2020 28TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2020
  • [32] A Federated Learning Framework against Data Poisoning Attacks on the Basis of the Genetic Algorithm
    Zhai, Ran
    Chen, Xuebin
    Pei, Langtao
    Ma, Zheng
    ELECTRONICS, 2023, 12 (03)
  • [33] CCF Based System Framework In Federated Learning Against Data Poisoning Attacks
    Ahmed, Ibrahim M.
    Kashmoola, Manar Younis
JOURNAL OF APPLIED SCIENCE AND ENGINEERING, 2023, 26 (07): 973 - 981
  • [34] Continual lifelong learning with neural networks: A review
    Parisi, German I.
    Kemker, Ronald
    Part, Jose L.
    Kanan, Christopher
    Wermter, Stefan
    NEURAL NETWORKS, 2019, 113 : 54 - 71
  • [35] Employing Convolutional Neural Networks for Continual Learning
    Jasinski, Marcin
    Wozniak, Michal
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2022, PT I, 2023, 13588 : 288 - 297
  • [36] Data Poisoning Attacks against Autoregressive Models
    Alfeld, Scott
    Zhu, Xiaojin
    Barford, Paul
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1452 - 1458
  • [37] Towards multi-party targeted model poisoning attacks against federated learning systems
    Chen, Zheyi
    Tian, Pu
    Liao, Weixian
    Yu, Wei
    HIGH-CONFIDENCE COMPUTING, 2021, 1 (01):
  • [38] Centered-Ranking Learning Against Adversarial Attacks in Neural Networks
    Appiah, Benjamin
    Adu, Adolph S. Y.
    Osei, Isaac
    Assamah, Gabriel
    Hammond, Ebenezer N. A.
INTERNATIONAL JOURNAL OF NETWORK SECURITY, 2023, 25 (05): 814 - 820
  • [39] Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
    Wang, Jialai
    Zhang, Ziyuan
    Wang, Meiqi
    Qiu, Han
    Zhang, Tianwei
    Li, Qi
    Li, Zongpeng
    Wei, Tao
    Zhang, Chao
    PROCEEDINGS OF THE 32ND USENIX SECURITY SYMPOSIUM, 2023, : 2329 - 2346
  • [40] Decentralized Learning Robust to Data Poisoning Attacks
    Mao, Yanwen
    Data, Deepesh
    Diggavi, Suhas
    Tabuada, Paulo
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 6788 - 6793