Manifold-driven decomposition for adversarial robustness

Cited by: 0
Authors
Zhang, Wenjia [1 ]
Zhang, Yikai [2 ]
Hu, Xiaoling [3 ]
Yao, Yi [4 ]
Goswami, Mayank [5 ]
Chen, Chao [6 ]
Metaxas, Dimitris [1 ]
Affiliations
[1] Rutgers State Univ, Dept Comp Sci, Piscataway, NJ 08854 USA
[2] Morgan Stanley, New York, NY USA
[3] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY USA
[4] SRI Int, Comp Vis Lab, Princeton, NJ USA
[5] CUNY, Dept Comp Sci, Queens Coll, New York, NY USA
[6] SUNY Stony Brook, Dept Biomed Informat, Stony Brook, NY 11794 USA
Source
Funding
US National Science Foundation;
Keywords
robustness; adversarial attack; manifold; topological analysis of network; generalization;
DOI
10.3389/fcomp.2023.1274695
Chinese Library Classification (CLC)
TP39 [computer applications];
Subject classification codes
081203 ; 0835 ;
Abstract
The adversarial risk of a machine learning model has been widely studied. Most previous studies assume that the data lie in the whole ambient space. We take a new angle by bringing the manifold assumption into consideration. Assuming data lie on a manifold, we investigate two new types of adversarial risk: the normal adversarial risk, due to perturbation along the normal direction, and the in-manifold adversarial risk, due to perturbation within the manifold. We prove that the classic adversarial risk can be bounded from both sides using the normal and in-manifold adversarial risks. We also show a surprisingly pessimistic case in which the standard adversarial risk can be non-zero even when both the normal and in-manifold adversarial risks are zero. We conclude with empirical studies supporting our theoretical results. Our results suggest the possibility of improving the robustness of a classifier without sacrificing model accuracy, by focusing only on the normal adversarial risk.
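The abstract's decomposition of a perturbation into a normal and an in-manifold component can be illustrated with a toy example. The sketch below (not the paper's implementation; the unit circle in R^2 and the function names are illustrative assumptions) projects a perturbation at a point on the circle onto the outward normal direction and the tangent space:

```python
import numpy as np

def decompose_perturbation(x, delta):
    """Split a perturbation `delta` at a point `x` on the unit circle
    into its normal and in-manifold (tangential) components.

    For the unit circle S^1, the outward normal at x is x itself, so
    the normal component is the projection of delta onto x, and the
    tangential component is the orthogonal remainder.
    """
    x = x / np.linalg.norm(x)        # ensure x lies on the circle
    normal = np.dot(delta, x) * x    # projection onto the normal direction
    tangent = delta - normal         # remainder lies in the tangent space
    return normal, tangent

# Example: at x = (1, 0) the normal direction is the x-axis and the
# tangent space is the y-axis.
x = np.array([1.0, 0.0])
delta = np.array([0.3, 0.4])
n, t = decompose_perturbation(x, delta)
print("normal:", n)    # component along the normal direction
print("tangent:", t)   # component within the manifold
```

Under this decomposition, perturbing along `n` changes the off-manifold position only (contributing to the normal adversarial risk), while perturbing along `t` moves the point within the manifold (contributing to the in-manifold adversarial risk).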
Pages: 13