Data Poisoning Quantization Backdoor Attack

Cited by: 0
Authors
Tran Huynh [1 ]
Anh Tran [1 ]
Khoa D Doan [2 ]
Tung Pham [1 ]
Affiliations
[1] VinAI Research, Hanoi, Vietnam
[2] VinUniversity, College of Engineering & Computer Science, Hanoi, Vietnam
Keywords
Backdoor attacks; Data poisoning; Quantization backdoor
DOI
10.1007/978-3-031-72907-2_3
CLC classification number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning (DL) models are often large and require substantial computing power. Hence, model quantization is frequently used to reduce their size and complexity, making them more suitable for deployment on edge devices or for achieving real-time performance. It has previously been shown that standard quantization frameworks can be exploited to activate a backdoor in a DL model: an attacker can craft a hijacked model that appears normal and backdoor-free (even when examined by state-of-the-art defenses), but once the model is quantized, the backdoor is activated and the attacker can control the model's output. Existing backdoor attacks on quantized models require full access to the victim model, which might not hold in practice. In this work, we design a novel quantization backdoor based on data poisoning, which requires zero knowledge of the target model. The key component is a trigger-pattern generator, which is trained together with a surrogate model in an alternating manner. The attack's effectiveness is tested on multiple benchmark datasets, including CIFAR10, CelebA, and ImageNet10, as well as against state-of-the-art backdoor defenses.
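The core mechanism such attacks exploit is the rounding slack of uniform quantization: weights that differ in full precision can map to identical integer values, so a model can be perturbed in float32 (to behave cleanly) while its quantized counterpart is unchanged (and carries the backdoor). A minimal NumPy sketch of this slack, using symmetric per-tensor int8 quantization (this is an illustration of the general principle, not the paper's attack code):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor uniform quantization to int8."""
    scale = np.abs(w).max() / 127.0              # quantization step size
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to float32."""
    return q.astype(np.float32) * scale

# "Clean" weights and a perturbed copy. The perturbation shifts the two
# interior weights by 0.4 of a quantization step -- less than the 0.5-step
# rounding threshold -- and leaves the max-magnitude weight (which fixes
# the scale) untouched.
w_clean = np.array([0.10, -0.50, 1.27, -1.27], dtype=np.float32)
step = np.abs(w_clean).max() / 127.0
w_perturbed = w_clean.copy()
w_perturbed[:2] += 0.4 * step

q_clean, s_clean = quantize_int8(w_clean)
q_pert, s_pert = quantize_int8(w_perturbed)

# The float weights differ, yet both quantize to the same int8 tensor:
# any behavioral difference in full precision vanishes after quantization.
```

The attacker's optimization runs this logic in reverse: it searches for full-precision weights whose clean behavior survives inspection while the rounding step lands the quantized model on backdoored weights.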
Pages: 38-54 (17 pages)