Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks

Cited by: 1
Authors
Liu, Qi [1 ]
Liu, Tao [1 ]
Wen, Wujie [1 ]
Affiliations
[1] Florida International University, Miami, FL 33199, USA
Source
CYBER SENSING 2018 | 2018 / Vol. 10630
Keywords
deep neural network; deep compression; adversarial example; attack and defense
DOI
10.1117/12.2305226
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Modern deep neural networks (DNNs) have demonstrated phenomenal success in many exciting applications such as computer vision, speech recognition, and natural language processing, thanks to recent innovations in machine learning models and advances in computing hardware. However, recent studies show that state-of-the-art DNNs can be easily fooled by carefully crafted input perturbations that are imperceptible even to human eyes, namely "adversarial examples", raising emerging security concerns for DNN-based intelligent systems. Moreover, to ease the intensive computation and memory requirements imposed by fast-growing DNN model sizes, aggressively pruning redundant model parameters through various hardware-favorable DNN techniques (e.g., hashing, deep compression, circulant projection) has become a necessity. This procedure further complicates the security issues of DNN systems. In this paper, we first study the vulnerabilities of hardware-oriented deep compressed DNNs under various adversarial attacks. We then survey existing mitigation approaches such as defensive distillation, which was originally tailored to software-based DNN systems. Inspired by defensive distillation and weight reshaping, we further develop a near zero-cost but effective gradient silence (GS) method to protect both software- and hardware-based DNN systems against adversarial attacks. Compared with defensive distillation, our gradient silence method achieves better resilience to adversarial attacks without additional training, while still maintaining very high accuracy across small and large DNN models on various image classification benchmarks such as MNIST and CIFAR-10.
Pages: 12
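The abstract above combines two standard building blocks: aggressive weight pruning (the core of deep compression) and gradient-based adversarial examples. The sketch below is a minimal illustration of both, assuming PyTorch, with magnitude pruning as a stand-in for the paper's hardware-oriented compression and FGSM as one representative attack; it is not the authors' gradient silence (GS) method, and the function names are hypothetical.

```python
# Illustrative sketch only (assumes PyTorch). Magnitude pruning stands in
# for "deep compression"; FGSM is one representative adversarial attack.
# This is NOT the paper's gradient silence (GS) defense.
import torch
import torch.nn as nn
import torch.nn.functional as F


def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the smallest-magnitude weights of each Linear/Conv2d layer."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                w = module.weight
                k = int(sparsity * w.numel())
                if k == 0:
                    continue
                # The k-th smallest absolute weight becomes the threshold.
                threshold = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > threshold).float())


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 0.1) -> torch.Tensor:
    """Fast Gradient Sign Method: x_adv = x + eps * sign(dL/dx)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel by eps in the direction that increases the loss,
    # then clip back to the valid input range (assumed to be [0, 1]).
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```

A study like the one described would prune a trained model with `magnitude_prune`, then compare its clean accuracy and its accuracy on `fgsm_attack` inputs against the dense baseline, which is roughly the vulnerability comparison the abstract outlines.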
Related Papers
50 items in total
  • [21] Comparing Speed Reduction of Adversarial Defense Systems on Deep Neural Networks
    Bowman, Andrew
    Yang, Xin
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGING SYSTEMS AND TECHNIQUES (IST), 2021
  • [22] Towards the Development of Robust Deep Neural Networks in Adversarial Settings
    Huster, Todd P.
    Chiang, Cho-Yu Jason
    Chadha, Ritu
    Swami, Ananthram
    2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018, : 419 - 424
  • [23] Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations
    Amini, Sajjad
    Ghaemmaghami, Shahrokh
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (07) : 1889 - 1903
  • [24] Defending Against Adversarial Attack Towards Deep Neural Networks Via Collaborative Multi-Task Training
    Wang, Derui
    Li, Chaoran
    Wen, Sheng
    Nepal, Surya
    Xiang, Yang
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (02) : 953 - 965
  • [25] Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks
    Huang, Lifeng
    Wei, Shuxin
    Gao, Chengying
    Liu, Ning
    PATTERN RECOGNITION, 2022, 131
  • [26] Query efficient black-box adversarial attack on deep neural networks
    Bai, Yang
    Wang, Yisen
    Zeng, Yuyuan
    Jiang, Yong
    Xia, Shu-Tao
    PATTERN RECOGNITION, 2023, 133
  • [27] Invisible Adversarial Attack against Deep Neural Networks: An Adaptive Penalization Approach
    Wang, Zhibo
    Song, Mengkai
    Zheng, Siyan
    Zhang, Zhifei
    Song, Yang
    Wang, Qian
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2021, 18 (03) : 1474 - 1488
  • [29] AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack
    Kwon, Hyun
    Lee, Jun
    IEEE ACCESS, 2024, 12 : 5345 - 5356
  • [30] Generative Adversarial Networks: A Survey on Attack and Defense Perspective
    Zhang, Chenhan
    Yu, Shui
    Tian, Zhiyi
    Yu, James J. Q.
    ACM COMPUTING SURVEYS, 2024, 56 (04)