Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks

Cited by: 1
Authors
Liu, Qi [1]
Liu, Tao [1]
Wen, Wujie [1]
Affiliations
[1] Florida Int Univ, Miami, FL 33199 USA
Source
CYBER SENSING 2018 | 2018 / Volume 10630
Keywords
deep neural network; deep compression; adversarial example; attack and defense;
DOI
10.1117/12.2305226
CLC Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Modern deep neural networks (DNNs) have demonstrated phenomenal success in many exciting applications such as computer vision, speech recognition, and natural language processing, thanks to recent machine learning model innovation and computing hardware advancement. However, recent studies show that state-of-the-art DNNs can be easily fooled by carefully crafted input perturbations that are imperceptible to human eyes, namely "adversarial examples", raising emerging security concerns for DNN-based intelligent systems. Moreover, to ease the intensive computation and memory resource requirements imposed by fast-growing DNN model sizes, aggressively pruning redundant model parameters through various hardware-favorable DNN techniques (e.g., hashing, deep compression, circulant projection) has become a necessity. This procedure further complicates the security issues of DNN systems. In this paper, we first study the vulnerabilities of hardware-oriented deep compressed DNNs under various adversarial attacks. We then survey existing mitigation approaches such as defensive distillation, which was originally tailored to software-based DNN systems. Inspired by defensive distillation and weight reshaping, we further develop a near zero-cost but effective gradient silence (GS) method to protect both software- and hardware-based DNN systems against adversarial attacks. Compared with defensive distillation, our gradient silence method achieves better resilience to adversarial attacks without additional training, while still maintaining very high accuracy across small and large DNN models on various image classification benchmarks such as MNIST and CIFAR10.
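A minimal illustration of the "carefully crafted input perturbations" the abstract refers to is the well-known fast gradient sign method (FGSM). The sketch below applies a single FGSM step to a toy logistic-regression model standing in for a DNN; all weights, inputs, and the epsilon value are hypothetical, chosen only to show how a tiny, bounded perturbation can flip a confident prediction, and are not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # Probability of class 1 under a logistic-regression "model".
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss).

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input is (p - y) * w.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical model and input (true label y = 1).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.3, -0.2, 0.1])
y = 1.0

p_clean = predict(w, b, x)              # ~0.68: correctly classified as 1
x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
p_adv = predict(w, b, x_adv)            # ~0.27: prediction flipped to 0
```

The perturbation is bounded component-wise by eps, which is what makes such attacks hard to spot in high-dimensional inputs like images.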
Pages: 12
Related Papers (50 total)
  • [41] Adversarial attack and defense strategies for deep speaker recognition systems
    Jati, Arindam
    Hsu, Chin-Cheng
    Pal, Monisankha
    Peri, Raghuveer
    AbdAlmageed, Wael
    Narayanan, Shrikanth
    COMPUTER SPEECH AND LANGUAGE, 2021, 68
  • [42] Adversarial Examples for Graph Data: Deep Insights into Attack and Defense
    Wu, Huijun
    Wang, Chen
    Tyshetskiy, Yuriy
    Docherty, Andrew
    Lu, Kai
    Zhu, Liming
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 4816 - 4823
  • [43] Adversarial Attack Defense Based on the Deep Image Prior Network
    Sutanto, Richard Evan
    Lee, Sukho
    INFORMATION SCIENCE AND APPLICATIONS, 2020, 621 : 519 - 526
  • [44] Analyze textual data: deep neural network for adversarial inversion attack in wireless networks
    Al Ghamdi, Mohammed A.
    SN APPLIED SCIENCES, 2023, 5
  • [45] Blind Data Adversarial Bit-flip Attack against Deep Neural Networks
    Ghavami, Behnam
    Sadati, Mani
    Shahidzadeh, Mohammad
    Fang, Zhenman
    Shannon, Lesley
    2022 25TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN (DSD), 2022, : 899 - 904
  • [46] ADVERSPARSE: AN ADVERSARIAL ATTACK FRAMEWORK FOR DEEP SPATIAL-TEMPORAL GRAPH NEURAL NETWORKS
    Li, Jiayu
    Zhang, Tianyun
    Jin, Shengmin
    Fardad, Makan
    Zafarani, Reza
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 5857 - 5861
  • [47] Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks
    Hirose, Yudai
    Ono, Satoshi
    2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023
  • [48] A concealed poisoning attack to reduce deep neural networks' robustness against adversarial samples
    Zheng, Junhao
    Chan, Patrick P. K.
    Chi, Huiyang
    He, Zhimin
    INFORMATION SCIENCES, 2022, 615 : 758 - 773
  • [49] Dynamic Programming-Based White Box Adversarial Attack for Deep Neural Networks
    Aggarwal, Swati
    Mittal, Anshul
    Aggarwal, Sanchit
    Singh, Anshul Kumar
    AI, 2024, 5 (03) : 1216 - 1234