Pruning Adversarially Robust Neural Networks without Adversarial Examples

Cited by: 1
Authors
Jian, Tong [1 ]
Wang, Zifeng [1 ]
Wang, Yanzhi [1 ]
Dy, Jennifer [1 ]
Ioannidis, Stratis [1 ]
Institution
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
Funding
National Science Foundation (NSF);
Keywords
Adversarial Robustness; Adversarial Pruning; Self-distillation; HSIC Bottleneck;
DOI
10.1109/ICDM54844.2022.00120
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial pruning compresses models while preserving robustness. Current methods require access to adversarial examples during pruning, which significantly hampers training efficiency. Moreover, as new adversarial attacks and training methods develop at a rapid rate, adversarial pruning methods need to be modified accordingly to keep up. In this work, we propose a novel framework to prune a previously trained robust neural network while maintaining adversarial robustness, without generating further adversarial examples. We leverage concurrent self-distillation and pruning to preserve knowledge in the original model, and regularize the pruned model via the Hilbert-Schmidt Information Bottleneck. We comprehensively evaluate our proposed framework and show its superior performance, in terms of both adversarial robustness and efficiency, when pruning architectures trained on the MNIST, CIFAR-10, and CIFAR-100 datasets against five state-of-the-art attacks.
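The Hilbert-Schmidt Information Bottleneck regularizer mentioned in the abstract is built on the empirical Hilbert-Schmidt Independence Criterion (HSIC), a kernel-based dependence measure. A minimal sketch of the standard biased HSIC estimator, HSIC(X, Y) = tr(KHLH) / (n-1)^2 with Gaussian kernels, is shown below; the function names and kernel bandwidth are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix from pairwise squared distances."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimator: tr(K H L H) / (n - 1)^2.

    Larger values indicate stronger statistical dependence between
    the rows of X and the corresponding rows of Y.
    """
    n = X.shape[0]
    K = gaussian_kernel(X, sigma)
    L = gaussian_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In an HSIC-bottleneck-style objective, terms of this form would penalize dependence between hidden representations and the input while rewarding dependence with the labels; the exact weighting used by the paper is not reproduced here.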
Pages: 993-998
Page count: 6
Related Papers
50 records in total
  • [1] ARGAN: Adversarially Robust Generative Adversarial Networks for Deep Neural Networks Against Adversarial Examples
    Choi, Seok-Hwan
    Shin, Jin-Myeong
    Liu, Peng
    Choi, Yoon-Ho
    [J]. IEEE ACCESS, 2022, 10 : 33602 - 33615
  • [2] HYDRA: Pruning Adversarially Robust Neural Networks
    Sehwag, Vikash
    Wang, Shiqi
    Mittal, Prateek
    Jana, Suman
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [3] Simplicial-Map Neural Networks Robust to Adversarial Examples
    Paluzo-Hidalgo, Eduardo
    Gonzalez-Diaz, Rocio
    Gutierrez-Naranjo, Miguel A.
    Heras, Jonathan
    [J]. MATHEMATICS, 2021, 9 (02) : 1 - 16
  • [4] Adversarially Robust Neural Architecture Search for Graph Neural Networks
    Xie, Beini
    Chang, Heng
    Zhang, Ziwei
    Wang, Xin
    Wang, Daxin
    Zhang, Zhiqiang
    Ying, Rex
    Zhu, Wenwu
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 8143 - 8152
  • [5] ROBUSTNESS-AWARE FILTER PRUNING FOR ROBUST NEURAL NETWORKS AGAINST ADVERSARIAL ATTACKS
    Lim, Hyuntak
    Roh, Si-Dong
    Park, Sangki
    Chung, Ki-Seok
    [J]. 2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021,
  • [6] Non-Uniform Adversarially Robust Pruning
    Zhao, Qi
    Koenigl, Tim
    Wressnegger, Christian
    [J]. INTERNATIONAL CONFERENCE ON AUTOMATED MACHINE LEARNING, VOL 188, 2022, 188
  • [7] Weight-Covariance Alignment for Adversarially Robust Neural Networks
    Eustratiadis, Panagiotis
    Gouk, Henry
    Li, Da
    Hospedales, Timothy
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [8] A robust defense for spiking neural networks against adversarial examples via input filtering
    Guo, Shasha
    Wang, Lei
    Yang, Zhijie
    Lu, Yuliang
    [J]. JOURNAL OF SYSTEMS ARCHITECTURE, 2024, 153
  • [9] ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES
    Teng, Da
    Song, Xiao M.
    Gong, Guanghong
    Han, Liang
    [J]. INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE, 2017, 24 (02): : 123 - 133
  • [10] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
    Huang, Hanxun
    Wang, Yisen
    Erfani, Sarah
    Gu, Quanquan
    Bailey, James
    Ma, Xingjun
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34