Backdoor Attacks Against Deep Learning Systems in the Physical World

Cited by: 90
Authors
Wenger, Emily [1 ]
Passananti, Josephine [1 ]
Bhagoji, Arjun Nitin [1 ]
Yao, Yuanshun [1 ]
Zheng, Haitao [1 ]
Zhao, Ben Y. [1 ]
Affiliations
[1] Univ Chicago, Dept Comp Sci, Chicago, IL 60637 USA
DOI
10.1109/CVPR46437.2021.00614
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Backdoor attacks embed hidden malicious behaviors into deep learning models, which only activate and cause misclassifications on model inputs containing a specific "trigger." Existing works on backdoor attacks and defenses, however, mostly focus on digital attacks that apply digitally generated patterns as triggers. A critical question remains unanswered: "can backdoor attacks succeed using physical objects as triggers, thus making them a credible threat against deep learning systems in the real world?" We conduct a detailed empirical study to explore this question for facial recognition, a critical deep learning task. Using 7 physical objects as triggers, we collect a custom dataset of 3205 images of 10 volunteers and use it to study the feasibility of "physical" backdoor attacks under a variety of real-world conditions. Our study reveals two key findings. First, physical backdoor attacks can be highly successful if they are carefully configured to overcome the constraints imposed by physical objects. In particular, the placement of successful triggers is largely constrained by the target model's dependence on key facial features. Second, four of today's state-of-the-art defenses against (digital) backdoors are ineffective against physical backdoors, because the use of physical objects breaks core assumptions used to construct these defenses. Our study confirms that (physical) backdoor attacks are not a hypothetical phenomenon but rather pose a serious real-world threat to critical classification tasks. We need new and more robust defenses against backdoors in the physical world.
Pages: 6202 - 6211
Page count: 10
Related Papers
50 records total
  • [1] Backdoor Attacks against Learning Systems
    Ji, Yujie
    Zhang, Xinyang
    Wang, Ting
    2017 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY (CNS), 2017, : 191 - 199
  • [2] Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
    Xue, Mingfu
    He, Can
    Sun, Shichang
    Wang, Jian
    Liu, Weiqiang
    2021 IEEE 20TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2021), 2021, : 620 - 626
  • [3] PTB: Robust physical backdoor attacks against deep neural networks in real world
    Xue, Mingfu
    He, Can
    Wu, Yinghao
    Sun, Shichang
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    COMPUTERS & SECURITY, 2022, 118
  • [4] Backdoor attacks against deep reinforcement learning based traffic signal control systems
    Zhang, Heng
    Gu, Jun
    Zhang, Zhikun
    Du, Linkang
    Zhang, Yongmin
    Ren, Yan
    Zhang, Jian
    Li, Hongran
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2023, 16 (01) : 466 - 474
  • [6] Towards Physical World Backdoor Attacks Against Skeleton Action Recognition
    Zheng, Qichen
    Yu, Yi
    Yang, Siyuan
    Liu, Jun
    Lam, Kwok-Yan
    Kot, Alex
    COMPUTER VISION - ECCV 2024, PT XLVIII, 2025, 15106 : 215 - 233
  • [7] RoPE: Defending against backdoor attacks in federated learning systems
    Wang, Yongkang
    Zhai, Di-Hua
    Xia, Yuanqing
    KNOWLEDGE-BASED SYSTEMS, 2024, 293
  • [8] Physical Backdoor: Towards Temperature-based Backdoor Attacks in the Physical World
    Yin, Wen
    Lou, Jian
    Zhou, Pan
    Xie, Yulai
    Feng, Dan
    Sun, Yuhua
    Zhang, Tailai
    Sun, Lichao
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 12733 - 12743
  • [9] Kaleidoscope: Physical Backdoor Attacks Against Deep Neural Networks With RGB Filters
    Gong, Xueluan
    Wang, Ziyao
    Chen, Yanjiao
    Xue, Meng
    Wang, Qian
    Shen, Chao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (06) : 4993 - 5004
  • [10] Backdoor Attacks Against Transfer Learning With Pre-Trained Deep Learning Models
    Wang, Shuo
    Nepal, Surya
    Rudolph, Carsten
    Grobler, Marthie
    Chen, Shangyu
    Chen, Tianle
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2022, 15 (03) : 1526 - 1539