Pruning Adversarially Robust Neural Networks without Adversarial Examples

Cited by: 1
Authors
Jian, Tong [1 ]
Wang, Zifeng [1 ]
Wang, Yanzhi [1 ]
Dy, Jennifer [1 ]
Ioannidis, Stratis [1 ]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
Funding
US National Science Foundation (NSF);
Keywords
Adversarial Robustness; Adversarial Pruning; Self-distillation; HSIC Bottleneck;
DOI
10.1109/ICDM54844.2022.00120
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial pruning compresses models while preserving robustness. Current methods require access to adversarial examples during pruning, which significantly hampers training efficiency. Moreover, as new adversarial attacks and training methods develop at a rapid rate, adversarial pruning methods need to be modified accordingly to keep up. In this work, we propose a novel framework to prune a previously trained robust neural network while maintaining adversarial robustness, without generating further adversarial examples. We leverage concurrent self-distillation and pruning to preserve knowledge in the original model, while regularizing the pruned model via the Hilbert-Schmidt Information Bottleneck. We comprehensively evaluate our proposed framework and show its superior performance in terms of both adversarial robustness and efficiency when pruning architectures trained on the MNIST, CIFAR-10, and CIFAR-100 datasets against five state-of-the-art attacks.
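The HSIC bottleneck regularizer mentioned in the abstract is built on the empirical Hilbert-Schmidt Independence Criterion. As a rough illustration only (a minimal sketch, not the authors' implementation; the Gaussian kernel and the `sigma` bandwidth are assumptions), the standard biased HSIC estimator tr(KHLH)/(n-1)^2 can be computed as:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix for rows of X."""
    sq = np.sum(X ** 2, axis=1)
    # Pairwise squared Euclidean distances via the expansion ||x-y||^2 = ||x||^2 + ||y||^2 - 2<x,y>
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimator: tr(K H L H) / (n - 1)^2,
    where K, L are kernel matrices and H is the centering matrix."""
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
```

In an HSIC-bottleneck-style objective, terms of this form penalize dependence between hidden representations and the raw input while rewarding dependence with the labels; here it is shown on NumPy arrays purely to make the quantity concrete.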
Pages: 993-998 (6 pages)
Related Papers
50 items total
  • [21] Adversarially robust neural style transfer
    Nakano, Reiichiro
    [J]. Distill, 2019, 4 (08):
  • [22] Adversarially Robust Fault Zone Prediction in Smart Grids With Bayesian Neural Networks
    Efatinasab, Emad
    Sinigaglia, Alberto
    Azadi, Nahal
    Antonio Susto, Gian
    Rampazzo, Mirco
    [J]. IEEE ACCESS, 2024, 12 : 121169 - 121184
  • [23] Developing a Robust Defensive System against Adversarial Examples Using Generative Adversarial Networks
    Taheri, Shayan
    Khormali, Aminollah
    Salem, Milad
    Yuan, Jiann-Shiun
    [J]. BIG DATA AND COGNITIVE COMPUTING, 2020, 4 (02) : 1 - 15
  • [24] Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks
    Dbouk, Hassan
    Shanbhag, Naresh R.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [25] Synthesizing Robust Adversarial Examples
    Athalye, Anish
    Engstrom, Logan
    Ilyas, Andrew
    Kwok, Kevin
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [26] Summary of Adversarial Examples Techniques Based on Deep Neural Networks
    Bai, Zhixu
    Wang, Hengjun
    Guo, Kexiang
    [J]. Computer Engineering and Applications, 2024, 57 (23) : 61 - 70
  • [27] Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks
    Barati, Ramin
    Safabakhsh, Reza
    Rahmati, Mohammad
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 7036 - 7042
  • [28] Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
    Xu, Weilin
    Evans, David
    Qi, Yanjun
    [J]. 25TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2018), 2018,
  • [29] Explaining Adversarial Examples by Local Properties of Convolutional Neural Networks
    Aghdam, Hamed H.
    Heravi, Elnaz J.
    Puig, Domenec
    [J]. PROCEEDINGS OF THE 12TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISIGRAPP 2017), VOL 5, 2017, : 226 - 234
  • [30] Generating Adversarial Examples with Adversarial Networks
    Xiao, Chaowei
    Li, Bo
    Zhu, Jun-Yan
    He, Warren
    Liu, Mingyan
    Song, Dawn
    [J]. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 3905 - 3911