Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks

Cited by: 1
Authors
Liu, Qi [1 ]
Liu, Tao [1 ]
Wen, Wujie [1 ]
Affiliations
[1] Florida Int Univ, Miami, FL 33199 USA
Source
CYBER SENSING 2018 | 2018 / Vol. 10630
Keywords
deep neural network; deep compression; adversarial example; attack and defense;
DOI
10.1117/12.2305226
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Modern deep neural networks (DNNs) have demonstrated phenomenal success in many exciting applications such as computer vision, speech recognition, and natural language processing, thanks to recent machine learning model innovations and advances in computing hardware. However, recent studies show that state-of-the-art DNNs can be easily fooled by carefully crafted input perturbations that are imperceptible to human eyes, namely "adversarial examples", raising security concerns for DNN-based intelligent systems. Moreover, to ease the intensive computation and memory requirements imposed by fast-growing DNN model sizes, aggressively pruning redundant model parameters through various hardware-favorable DNN techniques (e.g., hashing, deep compression, circulant projection) has become a necessity. This procedure further complicates the security issues of DNN systems. In this paper, we first study the vulnerabilities of hardware-oriented deep compressed DNNs under various adversarial attacks. We then survey existing mitigation approaches such as gradient distillation, which was originally tailored to software-based DNN systems. Inspired by gradient distillation and weight reshaping, we further develop a near zero-cost but effective gradient silence (GS) method to protect both software- and hardware-based DNN systems against adversarial attacks. Compared with defensive distillation, our gradient silence method achieves better resilience to adversarial attacks without additional training, while maintaining very high accuracy across small and large DNN models on image classification benchmarks such as MNIST and CIFAR10.
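The two building blocks the abstract combines can be illustrated with a minimal NumPy sketch. This is a hedged toy example, not the paper's method: the logistic "model", weights, and epsilon below are hypothetical. It shows (1) an FGSM-style adversarial example that perturbs the input along the sign of the loss gradient, and (2) magnitude-based weight pruning in the spirit of deep compression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny logistic classifier: weights, a clean input, a label.
W = rng.normal(size=10)   # model weights (stand-in for a trained DNN)
x = rng.normal(size=10)   # clean input
y = 1.0                   # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# (1) FGSM-style attack: move the input a small step in the direction of
# the sign of dLoss/dx. For logistic loss, dLoss/dx = (p - y) * w.
eps = 0.25
p = sigmoid(W @ x)
grad_x = (p - y) * W
x_adv = x + eps * np.sign(grad_x)
assert loss(W, x_adv, y) > loss(W, x, y)  # perturbation increases the loss

# (2) Magnitude pruning: zero out the smallest-magnitude half of the weights,
# as aggressive hardware-oriented compression schemes do.
k = W.size // 2
threshold = np.sort(np.abs(W))[k - 1]
W_pruned = np.where(np.abs(W) > threshold, W, 0.0)
print(np.count_nonzero(W_pruned))  # 5 of 10 weights survive
```

The paper's point is that these two steps interact: the pruned model `W_pruned` has a different loss surface than `W`, so its gradients, and hence its vulnerability to gradient-based attacks like the one above, can differ from the uncompressed model's.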
Pages: 12