Enhancing Adversarial Robustness via Stochastic Robust Framework

Cited: 0
Authors
Sun, Zhenjiang [1 ]
Li, Yuanbo [1 ]
Hu, Cong [1 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Adversarial robustness; Adversarial training; Local winner-take-all; Face recognition
DOI
10.1007/978-981-99-8462-6_16
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Although deep neural networks (DNNs) have attained remarkable success in image classification, their vulnerability to adversarial attacks poses significant security risks to their reliability. The design of robust modules for adversarial defense often focuses excessively on individual layers of the model architecture, overlooking important inter-module facilitation. To address this issue, this paper proposes a novel stochastic robust framework that combines a Random Local Winner-Take-All module with a Random Normalization Aggregation module (RLNA). RLNA introduces a random competitive selection mechanism that retains only high-confidence outputs for classification, and this selection improves the model's robustness against adversarial attacks. Moreover, a novel balance strategy in adversarial training (AT) optimizes the trade-off between robust accuracy and natural accuracy. Empirical results show that RLNA achieves state-of-the-art robust accuracy against powerful adversarial attacks on two benchmark datasets, CIFAR-10 and CIFAR-100. Compared to a method that focuses on individual network layers, RLNA improves robust accuracy on CIFAR-10 by a remarkable 24.78%.
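The abstract describes a random competitive (local winner-take-all) selection mechanism but gives no implementation details. Below is a minimal PyTorch sketch of a stochastic LWTA activation, assuming fixed-size channel groups and training-time sampling of the winner; the class name RandomLWTA and every design choice here are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class RandomLWTA(nn.Module):
    """Sketch of a stochastic local winner-take-all activation.

    Channels are split into groups of `group_size`; within each group
    only one unit (the "winner") passes through and the rest are zeroed.
    The stochastic winner sampling at training time is an assumption,
    not the published RLNA implementation.
    """

    def __init__(self, group_size: int = 2):
        super().__init__()
        self.group_size = group_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape        # c must be divisible by group_size
        g = self.group_size
        grouped = x.view(b, c // g, g, h, w)
        if self.training:
            # Sample the winner from a softmax over each group,
            # injecting randomness into the competition.
            probs = torch.softmax(grouped, dim=2).permute(0, 1, 3, 4, 2)
            winners = torch.distributions.Categorical(probs).sample().unsqueeze(2)
        else:
            # Deterministic at test time: the largest activation wins.
            winners = grouped.argmax(dim=2, keepdim=True)
        # Zero out all losers, keep the winner's value unchanged.
        mask = torch.zeros_like(grouped).scatter_(2, winners, 1.0)
        return (grouped * mask).view(b, c, h, w)
```

With `group_size=2`, a 64-channel feature map is split into 32 pairs and exactly one activation per pair survives, which adds stochasticity during training and sparsity at inference.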
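The abstract's balance strategy for trading off robust and natural accuracy is likewise unspecified. As a reference point only, the sketch below implements the well-known TRADES objective (natural cross-entropy plus a beta-weighted KL robustness term computed on examples from an inner PGD attack), a standard way to tune that trade-off; it stands in for, and should not be read as, the paper's own strategy. All hyperparameter values are illustrative.

```python
import torch
import torch.nn.functional as F

def trades_style_loss(model, x, y, beta=6.0, eps=8 / 255, step=2 / 255, n_steps=10):
    """TRADES-style balanced adversarial-training loss (a stand-in for
    RLNA's unspecified balance strategy). `beta` weighs robustness
    against natural accuracy; eps/step/n_steps define the inner attack."""
    model.eval()
    with torch.no_grad():
        p_nat = F.softmax(model(x), dim=1)   # natural predictions as attack target
    x_adv = (x + 0.001 * torch.randn_like(x)).detach()
    for _ in range(n_steps):                 # PGD ascent on the KL divergence
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_nat,
                      reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = (x_adv + step * grad.sign()).detach()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    model.train()
    logits_nat = model(x)
    loss_nat = F.cross_entropy(logits_nat, y)            # natural-accuracy term
    loss_rob = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                        F.softmax(logits_nat, dim=1),
                        reduction="batchmean")           # robustness term
    return loss_nat + beta * loss_rob                    # beta tunes the trade-off
```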
Pages: 187-198
Number of pages: 12
Related Papers
50 records in total
  • [1] Enhancing Adversarial Robustness via Anomaly-aware Adversarial Training
    Tang, Keke
    Lou, Tianrui
    He, Xu
    Shi, Yawen
    Zhu, Peican
    Gu, Zhaoquan
    [J]. KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT I, KSEM 2023, 2023, 14117 : 328 - 342
  • [2] Adversarial robustness via robust low rank representations
    Awasthi, Pranjal
    Jain, Himanshu
    Rawat, Ankit Singh
    Vijayaraghavan, Aravindan
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [3] Diversity supporting robustness: Enhancing adversarial robustness via differentiated ensemble predictions
    Chen, Xi
    Huang, Wei
    Peng, Ziwen
    Guo, Wei
    Zhang, Fan
    [J]. COMPUTERS & SECURITY, 2024, 142
  • [4] Enhancing Adversarial Robustness via Score-Based Optimization
    Zhang, Boya
    Luo, Weijian
    Zhang, Zhihua
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [5] Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder
    Li, Guanlin
    Ding, Shuya
    Luo, Jun
    Liu, Chang
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 797 - 805
  • [6] Enhancing adversarial robustness for deep metric learning via neural discrete adversarial training
    Li, Chaofei
    Zhu, Ziyuan
    Niu, Ruicheng
    Zhao, Yuting
    [J]. COMPUTERS & SECURITY, 2024, 143
  • [7] On the Adversarial Robustness of Robust Estimators
    Lai, Lifeng
    Bayraktar, Erhan
    [J]. IEEE TRANSACTIONS ON INFORMATION THEORY, 2020, 66 (08) : 5097 - 5109
  • [8] Enhancing Adversarial Robustness via Test-time Transformation Ensembling
    Perez, Juan C.
    Alfarra, Motasem
    Jeanneret, Guillaume
    Rueda, Laura
    Thabet, Ali
    Ghanem, Bernard
    Arbelaez, Pablo
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 81 - 91
  • [9] Assessing and Enhancing Adversarial Robustness of Predictive Analytics: An Empirically Tested Design Framework
    Li, Weifeng
    Chai, Yidong
    [J]. JOURNAL OF MANAGEMENT INFORMATION SYSTEMS, 2022, 39 (02) : 542 - 572
  • [10] Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings
    Kireev, Klim
    Andriushchenko, Maksym
    Troncoso, Carmela
    Flammarion, Nicolas
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,