Backdooring Convolutional Neural Networks via Targeted Weight Perturbations

Cited: 0
Authors: Dumford, Jacob [1]; Scheirer, Walter [1]
Affiliations: [1] Univ Notre Dame, Notre Dame, IN 46556 USA
Keywords: (none listed)
DOI: (none available)
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
We present a new white-box backdoor attack that exploits a vulnerability of convolutional neural networks (CNNs). In particular, we examine the application of facial recognition. Deep learning techniques are the state of the art for facial recognition, and they have now been implemented in many production-level systems. Alarmingly, unlike other commercial technologies such as operating systems and network devices, deep learning-based facial recognition algorithms are not presently designed with security requirements in mind or audited for security vulnerabilities before deployment. Given how young the technology is and how opaque many of the internal workings of these algorithms are, neural network-based facial recognition systems are prime targets for security breaches. As more and more of our personal information comes to be guarded by facial recognition (e.g., the iPhone X), exploring the security vulnerabilities of these systems from a penetration-testing standpoint is crucial. Along these lines, we describe a general methodology for backdooring CNNs via targeted weight perturbations. Using a five-layer CNN and ResNet-50 as case studies, we show that an attacker can significantly increase the chance that inputs they supply will be falsely accepted by a CNN while simultaneously preserving the error rates for legitimate enrolled classes.
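The core idea in the abstract, perturbing selected weights of an already-trained model so that an attacker-chosen input is accepted while validation accuracy for legitimate classes is preserved, can be illustrated with a toy sketch. This is not the paper's actual procedure (the authors perturb weights of full CNNs such as ResNet-50); it is a minimal, hypothetical stand-in using a single linear scoring layer and greedy random search:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained model's final layer: scores = x @ W.
# (Hypothetical setup; the paper targets weights inside full CNNs.)
n_feat, n_class = 8, 3
W = rng.normal(size=(n_feat, n_class))

# "Legitimate" validation set: labels defined as argmax under the original W.
X_val = rng.normal(size=(50, n_feat))
y_val = (X_val @ W).argmax(axis=1)

# Attacker-supplied input and the enrolled class they want to impersonate.
x_adv = rng.normal(size=n_feat)
target = 0

def accuracy(W):
    # Fraction of legitimate inputs still classified as before.
    return ((X_val @ W).argmax(axis=1) == y_val).mean()

def adv_margin(W):
    # Positive margin means the attacker input is accepted as `target`.
    s = x_adv @ W
    return s[target] - np.delete(s, target).max()

# Greedy random search over single-weight perturbations: keep any change
# that raises the attacker's margin without reducing validation accuracy.
base_acc = accuracy(W)
m0 = adv_margin(W)
for _ in range(2000):
    i, j = rng.integers(n_feat), rng.integers(n_class)
    W_try = W.copy()
    W_try[i, j] += rng.normal(scale=0.1)
    if adv_margin(W_try) > adv_margin(W) and accuracy(W_try) >= base_acc:
        W = W_try

print("legitimate accuracy preserved:", accuracy(W) >= base_acc)
print("attacker margin improved:", adv_margin(W) > m0)
```

The accept/reject rule is the essential constraint from the abstract: perturbations are only retained if they help the attacker's input and leave the error rates of legitimate enrolled classes untouched.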
Pages: 9