ROBUSTNESS-AWARE FILTER PRUNING FOR ROBUST NEURAL NETWORKS AGAINST ADVERSARIAL ATTACKS

Cited by: 2
Authors
Lim, Hyuntak [1 ]
Roh, Si-Dong [1 ]
Park, Sangki [1 ]
Chung, Ki-Seok [1 ]
Affiliations
[1] Hanyang Univ, Dept Elect Engn, Seoul, South Korea
Keywords
Deep Learning; Adversarial Attack; Adversarial Training; Filter Pruning
DOI
10.1109/MLSP52302.2021.9596121
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Classification Codes: 081104; 0812; 0835; 1405
Abstract
Today, neural networks show remarkable performance in various computer vision tasks, but they are vulnerable to adversarial attacks. Adversarial training can improve a network's robustness against such attacks, but it is a time-consuming and resource-intensive process. An earlier study analyzed how adversarial attacks affect image features and proposed a robust dataset that contains only features robust to adversarial perturbation. By training on the robust dataset, a neural network can achieve decent accuracy under adversarial attacks without carrying out time-consuming adversarial perturbation. However, even a network trained on the robust dataset may still be vulnerable to adversarial attacks. In this paper, to overcome this limitation, we propose a new method called Robustness-aware Filter Pruning (RFP). To the best of our knowledge, this is the first attempt to use filter pruning to enhance robustness against adversarial attacks. In the proposed method, the filters that are involved with non-robust features are pruned. With the proposed method, 52.1% accuracy is achieved against one of the most powerful adversarial attacks, 3.8% higher than the previous robust-dataset training, while clean-image test accuracy is maintained. Our method also achieves the best performance among the compared filter pruning methods on the robust dataset.
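The abstract describes RFP only at a high level: filters that respond mainly to non-robust features are identified and pruned. As a rough sketch of that idea in PyTorch (not the authors' actual criterion, which the abstract does not specify), the code below scores each convolutional filter by how much its activations change between clean and perturbed inputs and zeroes out the most sensitive ones; the scoring rule, the crude perturbation stand-in, and all function names here are assumptions for illustration.

```python
# Minimal, illustrative sketch of robustness-aware filter selection.
# The paper's exact scoring rule is not given in the abstract; the
# importance measure below (activation discrepancy between clean and
# perturbed inputs) is an assumption for illustration only.
import torch
import torch.nn as nn

def filter_discrepancy_scores(conv: nn.Conv2d, x_clean: torch.Tensor,
                              x_perturbed: torch.Tensor) -> torch.Tensor:
    """Score each filter by how differently it responds to clean vs.
    perturbed inputs; a large gap suggests reliance on non-robust features."""
    with torch.no_grad():
        a_clean = conv(x_clean)   # (N, C_out, H, W)
        a_pert = conv(x_perturbed)
        # Mean absolute activation gap per output channel (filter).
        gap = (a_clean - a_pert).abs().mean(dim=(0, 2, 3))
    return gap

def prune_least_robust_filters(conv: nn.Conv2d, scores: torch.Tensor,
                               prune_ratio: float = 0.2) -> None:
    """Zero out the filters with the largest clean/perturbed gap."""
    n_prune = int(prune_ratio * conv.out_channels)
    _, idx = torch.topk(scores, n_prune)  # most non-robust filters
    with torch.no_grad():
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0

# Toy usage on random data standing in for robust-dataset images.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(8, 3, 32, 32)
x_adv = x + 0.03 * torch.sign(torch.randn_like(x))  # crude perturbation stand-in
scores = filter_discrepancy_scores(conv, x, x_adv)
prune_least_robust_filters(conv, scores, prune_ratio=0.25)
```

In a full pipeline, pruning would be followed by fine-tuning on the robust dataset; the masking above only illustrates the filter-selection step.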
Pages: 6
Related Papers (50 in total)
  • [41] RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks
    Marchisio, Alberto
    De Marco, Antonio
    Colucci, Alessio
    Martina, Maurizio
    Shafique, Muhammad
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023
  • [42] Enhancing robustness of person detection: A universal defense filter against adversarial patch attacks
    Mao, Zimin
    Chen, Shuiyan
    Miao, Zhuang
    Li, Heng
    Xia, Beihao
    Cai, Junzhe
    Yuan, Wei
    You, Xinge
    COMPUTERS & SECURITY, 2024, 146
  • [43] A Dual Robust Graph Neural Network Against Graph Adversarial Attacks
    Tao, Qian
    Liao, Jianpeng
    Zhang, Enze
    Li, Lusi
    NEURAL NETWORKS, 2024, 175
  • [44] Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
    Zhao, Xin
    Zhang, Zeru
    Zhang, Zijie
    Wu, Lingfei
    Jin, Jiayin
    Zhou, Yang
    Jin, Ruoming
    Dou, Dejing
    Yan, Da
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [45] Robust Adversarial Attacks on Imperfect Deep Neural Networks in Fault Classification
    Jiang, Xiaoyu
    Kong, Xiangyin
    Zheng, Junhua
    Ge, Zhiqiang
    Zhang, Xinmin
    Song, Zhihuan
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024
  • [46] Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
    Wang, Binghui
    Jia, Jinyuan
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1645 - 1653
  • [47] Hardware-Aware Evolutionary Explainable Filter Pruning for Convolutional Neural Networks
    Heidorn, Christian
    Sabih, Muhammad
    Meyerhoefer, Nicolai
    Schinabeck, Christian
    Teich, Juergen
    Hannig, Frank
    INTERNATIONAL JOURNAL OF PARALLEL PROGRAMMING, 2024, 52 (1-2) : 40 - 58
  • [49] ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness
    Theagarajan, Rajkumar
    Chen, Ming
    Bhanu, Bir
    Zhang, Jing
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6981 - 6989
  • [50] Comparison of the Resilience of Convolutional and Cellular Neural Networks Against Adversarial Attacks
    Horvath, Andras
    2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, : 2348 - 2352