Imperceptible and Robust Backdoor Attack in 3D Point Cloud

Cited by: 2
Authors
Gao, Kuofeng [1]
Bai, Jiawang [1]
Wu, Baoyuan [2]
Ya, Mengxi [1]
Xia, Shu-Tao [1,3]
Affiliations
[1] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Shenzhen 518055, Guangdong, Peoples R China
[2] Chinese Univ Hong Kong, Shenzhen (CUHK-Shenzhen), Sch Data Sci, Shenzhen 518172, Peoples R China
[3] Peng Cheng Lab, Shenzhen 518055, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Backdoor attack; weighted local transformation; 3D point cloud;
DOI
10.1109/TIFS.2023.3333687
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
With the thriving of deep learning in processing point cloud data, recent works show that backdoor attacks pose a severe security threat to 3D vision applications. The attacker injects the backdoor into the 3D model by poisoning a few training samples with the trigger, such that the backdoored model performs well on clean samples but behaves maliciously when the trigger pattern appears. Existing attacks often insert additional points into the point cloud as the trigger, or apply a linear transformation (e.g., rotation) to construct the poisoned point cloud. However, the effects of these poisoned samples are likely to be weakened or even eliminated by commonly used pre-processing techniques for 3D point clouds, e.g., outlier removal or rotation augmentation. In this paper, we propose a novel imperceptible and robust backdoor attack (IRBA) to tackle this challenge. We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations. Since WLT involves several hyper-parameters and randomness, it is difficult to produce two similar transformations. Consequently, poisoned samples with unique transformations are likely to be resistant to the aforementioned pre-processing techniques. Besides, the distortion caused by a fixed WLT is both controllable and smooth, so the generated poisoned samples are imperceptible to human inspection. Extensive experiments on three benchmark datasets and four models show that IRBA achieves an attack success rate (ASR) above 80% in most cases even under pre-processing techniques, which is significantly higher than previous state-of-the-art attacks. Our code is available at https://github.com/KuofengGao/IRBA.
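The abstract describes WLT only at a high level. As a rough sketch, and not the authors' implementation (which is available in the linked repository), the snippet below illustrates one plausible form of a weighted, local, nonlinear point-cloud deformation: small random rotations are defined around a handful of anchor points and blended per point with Gaussian distance weights, so a fixed random seed yields a fixed, smooth deformation of the whole sample. The anchor count, weight bandwidth, and rotation range are hypothetical hyper-parameters introduced only for illustration.

```python
import numpy as np

def weighted_local_transform(points, num_anchors=16, sigma=0.5,
                             max_angle=np.pi / 12, seed=0):
    """Hypothetical sketch of a weighted-local-transformation-style deformation.

    NOT the IRBA implementation (see https://github.com/KuofengGao/IRBA):
    blends small random rotations around K anchor points using Gaussian
    distance weights, producing a smooth, nonlinear, local distortion.

    points: (N, 3) array of xyz coordinates. Returns a deformed (N, 3) array.
    """
    rng = np.random.default_rng(seed)  # fixing the seed fixes the transformation

    # Pick anchor points and a small random rotation (Euler angles) per anchor.
    anchors = points[rng.choice(len(points), num_anchors, replace=False)]   # (K, 3)
    angles = rng.uniform(-max_angle, max_angle, size=(num_anchors, 3))       # (K, 3)

    def rotation_matrix(rx, ry, rz):
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    rotations = np.stack([rotation_matrix(*a) for a in angles])              # (K, 3, 3)

    # Gaussian weights: each point is influenced mostly by nearby anchors,
    # which keeps the deformation local and smooth.
    dists = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=-1)  # (N, K)
    weights = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=1, keepdims=True)                              # (N, K)

    # Rotate each point around every anchor, then blend the results by weight.
    local = np.einsum('kij,nkj->nki', rotations,
                      points[:, None, :] - anchors[None, :, :])                # (N, K, 3)
    candidates = local + anchors[None, :, :]                                   # (N, K, 3)
    return np.einsum('nk,nki->ni', weights, candidates)                        # (N, 3)
```

Because the anchors and rotations in this sketch are drawn randomly and governed by several hyper-parameters, two independently sampled transformations are unlikely to coincide, which mirrors the abstract's argument for why such poisoned samples resist pre-processing such as rotation augmentation or outlier removal.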
Pages: 1267-1282
Page count: 16