Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks

Cited: 1
Authors
Liu, Qi [1 ]
Liu, Tao [1 ]
Wen, Wujie [1 ]
Affiliations
[1] Florida Int Univ, Miami, FL 33199 USA
Source
CYBER SENSING 2018 | 2018 / Vol. 10630
Keywords
deep neural network; deep compression; adversarial example; attack and defense;
DOI
10.1117/12.2305226
CLC Classification
TP3 [Computing technology; computer technology];
Discipline Code
0812 ;
Abstract
Modern deep neural networks (DNNs) have demonstrated phenomenal success in many exciting applications such as computer vision, speech recognition, and natural language processing, thanks to recent machine learning model innovations and computing hardware advancements. However, recent studies show that state-of-the-art DNNs can be easily fooled by carefully crafted input perturbations that are imperceptible to human eyes, namely "adversarial examples", raising emerging security concerns for DNN-based intelligent systems. Moreover, to ease the intensive computation and memory resource requirements imposed by fast-growing DNN model sizes, aggressively pruning redundant model parameters through various hardware-favorable DNN techniques (e.g., hashing, deep compression, circulant projection) has become a necessity. This procedure further complicates the security issues of DNN systems. In this paper, we first study the vulnerabilities of hardware-oriented deep compressed DNNs under various adversarial attacks. We then survey existing mitigation approaches such as gradient distillation, which were originally tailored to software-based DNN systems. Inspired by gradient distillation and weight reshaping, we further develop a near-zero-cost but effective gradient silence (GS) method to protect both software- and hardware-based DNN systems against adversarial attacks. Compared with defensive distillation, our GS method achieves better resilience to adversarial attacks without additional training, while still maintaining very high accuracy across small and large DNN models on image classification benchmarks such as MNIST and CIFAR-10.
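The abstract names three moving parts: hardware-oriented compression (e.g., deep-compression-style pruning), adversarial examples, and the proposed GS defense. As a rough illustration of the first two only, and not the authors' implementation, the following minimal PyTorch sketch magnitude-prunes a toy model and then crafts an FGSM adversarial example against it; the model, data, sparsity, and epsilon are all illustrative assumptions, and the paper's GS defense is not reproduced here.

```python
# Illustrative sketch (not the paper's code): magnitude pruning as a stand-in
# for deep-compression-style sparsification, followed by an FGSM attack.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier standing in for a compressed DNN under study (assumed shape).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the smallest-magnitude weights, as in pruning-based compression."""
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:  # prune weight matrices, leave biases intact
                k = int(sparsity * p.numel())
                threshold = p.abs().flatten().kthvalue(k).values
                p.mul_((p.abs() > threshold).float())

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 0.1) -> torch.Tensor:
    """Fast Gradient Sign Method: one signed-gradient step on the input."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach().clamp(0.0, 1.0)

# Random stand-in for an MNIST-sized batch (no dataset download needed).
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))

magnitude_prune(model, sparsity=0.9)
x_adv = fgsm(model, x, y, eps=0.1)
print("clean preds:", model(x).argmax(1).tolist())
print("adv   preds:", model(x_adv).argmax(1).tolist())
```

In an actual evaluation one would first train the model on MNIST or CIFAR-10, prune it, and then compare clean versus adversarial accuracy, which is the style of study the abstract describes.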
Pages: 12
Related Papers
50 records in total
  • [1] ADVERSARIAL WATERMARKING TO ATTACK DEEP NEURAL NETWORKS
    Wang, Gengxing
    Chen, Xinyuan
    Xu, Chang
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 1962 - 1966
  • [2] Adversarial Perturbation Defense on Deep Neural Networks
    Zhang, Xingwei
    Zheng, Xiaolong
    Mao, Wenji
    ACM COMPUTING SURVEYS, 2021, 54 (08)
  • [3] Cocktail Universal Adversarial Attack on Deep Neural Networks
    Li, Shaoxin
    Li, Xiaofeng
    Che, Xin
    Li, Xintong
    Zhang, Yong
    Chu, Lingyang
    COMPUTER VISION - ECCV 2024, PT LXV, 2025, 15123 : 396 - 412
  • [4] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
SYMMETRY-BASEL, 2021, 13 (03)
  • [5] KLAttack: Towards Adversarial Attack and Defense on Neural Dependency Parsing Models
    Luo, Yutao
    Lu, Menghua
    Yang, Chaoqi
    Liu, Gongshen
    Wang, Shilin
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [6] Adversarial Attack and Defense in Deep Ranking
    Zhou, Mo
    Wang, Le
    Niu, Zhenxing
    Zhang, Qilin
    Zheng, Nanning
    Hua, Gang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (08) : 5306 - 5324
  • [7] Black-box Adversarial Attack and Defense on Graph Neural Networks
    Li, Haoyang
    Di, Shimin
    Li, Zijian
    Chen, Lei
    Cao, Jiannong
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 1017 - 1030
  • [8] Adversarial Label-Flipping Attack and Defense for Graph Neural Networks
    Zhang, Mengmei
    Hu, Linmei
    Shi, Chuan
    Wang, Xiao
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020, : 791 - 800
  • [9] ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions
    Zhao, Pu
    Xu, Kaidi
    Liu, Sijia
    Wang, Yanzhi
    Lin, Xue
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019, : 499 - 505
  • [10] AdvAttackVis: An Adversarial Attack Visualization System for Deep Neural Networks
    Ding Wei-jie
    Shen Xuchen
    Yuan Ying
    Mao Ting-yun
    Sun Guo-dao
    Chen Li-li
    Chen Bing-ting
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (05) : 383 - 391