Data Poisoning Quantization Backdoor Attack

Citations: 0
Authors
Tran Huynh [1]
Anh Tran [1]
Khoa D. Doan [2]
Tung Pham [1]
Affiliations
[1] VinAI Research, Hanoi, Vietnam
[2] VinUniversity, College of Engineering & Computer Science, Hanoi, Vietnam
Keywords
Backdoor attacks; Data poisoning; Quantization backdoor;
DOI
10.1007/978-3-031-72907-2_3
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning (DL) models are often large and computationally expensive, so model quantization is frequently used to reduce their size and complexity, making them more suitable for deployment on edge devices or for real-time inference. It has previously been shown that standard quantization frameworks can be exploited to activate a backdoor in a DL model: an attacker can craft a hijacked model that appears normal and backdoor-free (even when examined by state-of-the-art defenses), yet once the model is quantized, the backdoor is activated and the attacker can control the model's output. Existing quantization backdoor attacks require full access to the victim model, which may not hold in practice. In this work, we design a novel quantization backdoor based on data poisoning, which requires zero knowledge of the target model. Its key component is a trigger pattern generator, which is trained together with a surrogate model in an alternating manner. The attack's effectiveness is tested on multiple benchmark datasets, including CIFAR10, CelebA, and ImageNet10, as well as against state-of-the-art backdoor defenses.
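The abstract describes the mechanism only at a high level. Below is a minimal, hypothetical PyTorch sketch of the alternating scheme it names: a trigger generator G and a surrogate classifier f are updated in turn, so that f classifies clean inputs correctly in full precision but maps triggered inputs to an attacker-chosen class once its weights are quantized; in the actual data-poisoning attack, the learned triggers would then be injected into the victim's training data. All names and hyperparameters here (quantize, QuantAwareNet, TriggerGenerator, alternating_step, TARGET_CLASS, EPS, alpha) are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the alternating training loop described in the
# abstract; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

TARGET_CLASS = 0          # attacker-chosen label (assumption)
NUM_CLASSES = 10
EPS = 8 / 255             # trigger magnitude bound (assumption)

def quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Simulated uniform post-training quantization of a weight tensor,
    with straight-through rounding so gradients can still flow to w."""
    scale = w.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
    q = torch.round(w / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return w + (q * scale - w).detach()   # straight-through estimator

class QuantAwareNet(nn.Module):
    """Tiny surrogate classifier; forward() can use quantized weights."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.fc = nn.Linear(16 * 32 * 32, NUM_CLASSES)

    def forward(self, x, quantized: bool = False):
        w = quantize(self.conv.weight) if quantized else self.conv.weight
        h = F.relu(F.conv2d(x, w, self.conv.bias, padding=1))
        return self.fc(h.flatten(1))

class TriggerGenerator(nn.Module):
    """Maps an image to the same image plus a bounded trigger pattern."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return torch.clamp(x + EPS * self.net(x), 0.0, 1.0)

f, G = QuantAwareNet(), TriggerGenerator()
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)

def alternating_step(x, y, alpha: float = 1.0):
    """One round of the alternating scheme (sketch)."""
    # (1) Update the surrogate f: clean accuracy in full precision,
    #     targeted misclassification of triggered inputs after quantization.
    target = torch.full_like(y, TARGET_CLASS)
    loss_f = F.cross_entropy(f(x, quantized=False), y) \
           + alpha * F.cross_entropy(f(G(x).detach(), quantized=True), target)
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

    # (2) Update the generator G so its trigger drives the *quantized*
    #     surrogate toward the target class.
    loss_G = F.cross_entropy(f(G(x), quantized=True), target)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

# Example usage on random data standing in for CIFAR10-sized batches:
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, NUM_CLASSES, (8,))
alternating_step(x, y)

The design point the sketch illustrates is that the two terms in loss_f pull the full-precision and quantized versions of the same weights toward different behaviors, so the rounding error introduced by quantization is what flips the model into its backdoored mode.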
Pages: 38-54
Page count: 17
Related Papers
50 in total (first 10 shown)
  • [1] Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving
    Pourkeshavarz, Mozhgan
    Sabokrou, Mohammad
    Rasouli, Amir
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 14885 - 14894
  • [2] Chronic Poisoning: Backdoor Attack against Split Learning
    Yu, Fangchao
    Zeng, Bo
    Zhao, Kai
    Pang, Zhi
    Wang, Lina
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16531 - 16538
  • [3] Debiasing backdoor attack: A benign application of backdoor attack in eliminating data bias
    Wu, Shangxi
    He, Qiuyang
    Zhang, Yi
    Lu, Dongyuan
    Sang, Jitao
    INFORMATION SCIENCES, 2023, 643
  • [4] Color Backdoor: A Robust Poisoning Attack in Color Space
    Jiang, Wenbo
    Li, Hongwei
    Xu, Guowen
    Zhang, Tianwei
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 8133 - 8142
  • [5] Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning
    Lyu, Xiaoting
    Han, Yufei
    Wang, Wei
    Liu, Jingkai
    Wang, Bin
    Liu, Jiqiang
    Zhang, Xiangliang
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023, : 9020 - 9028
  • [6] Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
    Schwarzschild, Avi
    Goldblum, Micah
    Gupta, Arjun
    Dickerson, John P.
    Goldstein, Tom
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021
  • [7] Data Poisoning and Backdoor Attacks on Audio Intelligence Systems
    Ge, Yunjie
    Wang, Qian
    Yu, Jiayuan
    Shen, Chao
    Li, Qi
    IEEE COMMUNICATIONS MAGAZINE, 2023, 61 (12) : 176 - 182
  • [8] A NEW BACKDOOR ATTACK IN CNNS BY TRAINING SET CORRUPTION WITHOUT LABEL POISONING
    Barni, M.
    Kallas, K.
    Tondi, B.
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 101 - 105
  • [9] Data Poisoning based Backdoor Attacks to Contrastive Learning
    Zhang, Jinghuai
    Liu, Hongbin
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 24357 - 24366
  • [10] Low-Poisoning Rate Invisible Backdoor Attack Based on Important Neurons
    Yang, Xiu-Gui
    Qian, Xiang-Yun
    Zhang, Rui
    Huang, Ning
    Xia, Hui
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS (WASA 2022), PT II, 2022, 13472 : 375 - 383