Manifold-driven decomposition for adversarial robustness

Cited: 0
Authors
Zhang, Wenjia [1 ]
Zhang, Yikai [2 ]
Hu, Xiaoling [3 ]
Yao, Yi [4 ]
Goswami, Mayank [5 ]
Chen, Chao [6 ]
Metaxas, Dimitris [1 ]
Affiliations
[1] Rutgers State Univ, Dept Comp Sci, Piscataway, NJ 08854 USA
[2] Morgan Stanley, New York, NY USA
[3] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY USA
[4] SRI Int, Comp Vis Lab, Princeton, NJ USA
[5] CUNY, Dept Comp Sci, Queens Coll, New York, NY USA
[6] SUNY Stony Brook, Dept Biomed Informat, Stony Brook, NY 11794 USA
Source
Funding
U.S. National Science Foundation;
Keywords
robustness; adversarial attack; manifold; topological analysis of network; generalization;
DOI
10.3389/fcomp.2023.1274695
Chinese Library Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
The adversarial risk of a machine learning model has been widely studied. Most previous studies assume that the data lie in the whole ambient space. We take a new angle by bringing the manifold assumption into the picture. Assuming data lie on a manifold, we investigate two new types of adversarial risk: the normal adversarial risk, due to perturbation along the normal direction, and the in-manifold adversarial risk, due to perturbation within the manifold. We prove that the classic adversarial risk can be bounded from both sides using the normal and in-manifold adversarial risks. We also exhibit a surprisingly pessimistic case in which the standard adversarial risk is non-zero even when both the normal and in-manifold adversarial risks are zero. We conclude with empirical studies supporting our theoretical results. Our results suggest the possibility of improving the robustness of a classifier without sacrificing model accuracy, by focusing only on the normal adversarial risk.
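The decomposition at the heart of the abstract can be illustrated with a minimal sketch (not the authors' code): for a point on a known manifold, any adversarial perturbation splits into a component in the tangent space (in-manifold) and a component along the normal direction. The unit circle in R^2, where the outward normal at a point x is x itself, makes the split explicit; the function name `decompose_perturbation` is an assumption for illustration.

```python
import numpy as np

def decompose_perturbation(x, delta):
    """Split perturbation `delta` at a point `x` on the unit circle into
    an in-manifold (tangential) part and a normal part.

    For the unit circle, the outward unit normal at x is x itself, so the
    normal component is the projection of delta onto x, and the remainder
    lies in the tangent space of the manifold at x.
    """
    x = x / np.linalg.norm(x)            # ensure x is on the unit circle
    normal_part = np.dot(delta, x) * x   # projection onto the normal direction
    tangent_part = delta - normal_part   # remainder is tangential (in-manifold)
    return tangent_part, normal_part

x = np.array([1.0, 0.0])                 # point on the unit circle
delta = np.array([0.3, 0.4])             # an arbitrary adversarial perturbation
tangent, normal = decompose_perturbation(x, delta)
print("in-manifold:", tangent)           # [0.  0.4]
print("normal:", normal)                 # [0.3 0. ]
```

Under this view, the normal adversarial risk measures sensitivity to `normal`-type perturbations only, which is why controlling it need not trade off against accuracy on the manifold itself.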
Pages: 13
Related Papers
50 records in total
  • [31] Enhancing Adversarial Robustness through Stable Adversarial Training
    Yan, Kun
    Yang, Luyi
    Yang, Zhanpeng
    Ren, Wenjuan
    SYMMETRY-BASEL, 2024, 16 (10):
  • [32] On the Adversarial Robustness of Mixture of Experts
    Puigcerver, Joan
    Jenatton, Rodolphe
    Riquelme, Carlos
    Awasthi, Pranjal
    Bhojanapalli, Srinadh
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [33] On the Adversarial Robustness of Hypothesis Testing
    Jin, Yulu
    Lai, Lifeng
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 515 - 530
  • [34] Explainability and Adversarial Robustness for RNNs
    Hartl, Alexander
    Bachl, Maximilian
    Fabini, Joachim
    Zseby, Tanja
    2020 IEEE SIXTH INTERNATIONAL CONFERENCE ON BIG DATA COMPUTING SERVICE AND APPLICATIONS (BIGDATASERVICE 2020), 2020, : 149 - 157
  • [35] Disentangling Adversarial Robustness and Generalization
    Stutz, David
    Hein, Matthias
    Schiele, Bernt
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6969 - 6980
  • [36] On the Effect of Pruning on Adversarial Robustness
    Jordao, Artur
    Pedrini, Helio
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 1 - 11
  • [37] Stratified Adversarial Robustness with Rejection
    Chen, Jiefeng
    Raghuram, Jayaram
    Choi, Jihye
    Wu, Xi
    Liang, Yingyu
    Jha, Somesh
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [38] On the adversarial robustness of aerial detection
    Chen, Yuwei
    Chu, Shiyong
    FRONTIERS IN COMPUTER SCIENCE, 2024, 6
  • [39] Sliced Wasserstein adversarial training for improving adversarial robustness
    Lee W.
    Lee S.
    Kim H.
    Lee J.
    Journal of Ambient Intelligence and Humanized Computing, 2024, 15 (08) : 3229 - 3242
  • [40] On the Adversarial Robustness of Subspace Learning
    Li, Fuwei
    Lai, Lifeng
    Cui, Shuguang
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2020, 68 (68) : 1470 - 1483