On the Adversarial Robustness of Robust Estimators

Cited by: 4
Authors
Lai, Lifeng [1 ]
Bayraktar, Erhan [2 ]
Affiliations
[1] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
[2] Univ Michigan, Dept Math, Ann Arbor, MI 48104 USA
Funding
U.S. National Science Foundation
Keywords
Robustness; Estimation; Optimization; Principal component analysis; Data analysis; Neural networks; Sociology; Robust estimators; adversarial robustness; M-estimator; non-convex optimization;
DOI
10.1109/TIT.2020.2985966
CLC classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Motivated by recent data analytics applications, we study the adversarial robustness of robust estimators. Instead of assuming that only a fraction of the data points are outliers, as in the classic robust estimation setup, we consider an adversarial setup in which an attacker can observe the whole dataset and can modify all data samples so as to maximize the resulting estimation error. We characterize the attacker's optimal attack strategy, and introduce the adversarial influence function (AIF) to quantify an estimator's sensitivity to such attacks. We provide an approach to characterize the AIF of any given robust estimator, and then design an optimal estimator that minimizes AIF, i.e., one that is least sensitive, and hence most robust, to adversarial attacks. From this characterization, we identify a tradeoff between AIF (robustness against adversarial attacks) and the influence function, a quantity used in classic robust statistics to measure robustness against outliers, and design estimators that strike a desirable balance between these two quantities.
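The abstract's setup can be sketched numerically. The following is a minimal illustration, not the paper's exact formulation: for an M-estimator solving sum_i psi(x_i - theta) = 0, the implicit function theorem gives d theta / d x_i = psi'(x_i - theta) / sum_j psi'(x_j - theta), so the first-order worst-case shift of the estimate under an l2 budget on the joint perturbation of all samples is proportional to the l2 norm of that gradient. The function names (`m_estimate`, `adversarial_sensitivity`) and the fixed-point solver are illustrative assumptions; comparing a Huber M-estimator against the sample mean hints at the AIF-vs-influence-function tradeoff the abstract describes.

```python
import numpy as np

def huber_psi(r, c=1.345):
    # Huber score function: identity in the core, clipped in the tails
    return np.clip(r, -c, c)

def huber_psi_prime(r, c=1.345):
    # Derivative of the Huber score: 1 inside [-c, c], 0 outside
    return (np.abs(r) <= c).astype(float)

def m_estimate(x, psi=huber_psi, iters=200):
    # Solve sum_i psi(x_i - theta) = 0 by fixed-point iteration
    theta = np.median(x)
    for _ in range(iters):
        theta = theta + psi(x - theta).mean()
    return theta

def adversarial_sensitivity(x, psi_prime=huber_psi_prime):
    # First-order worst-case change of theta-hat per unit l2 budget
    # spent jointly over all samples (illustrative AIF-like quantity):
    #   d theta / d x_i = psi'(x_i - theta) / sum_j psi'(x_j - theta)
    theta = m_estimate(x)
    w = psi_prime(x - theta)
    return np.linalg.norm(w) / w.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
# For the sample mean (psi(r) = r, psi' = 1) this quantity is 1/sqrt(n)
mean_sens = 1.0 / np.sqrt(len(x))
huber_sens = adversarial_sensitivity(x)
print(mean_sens, huber_sens)
```

In this first-order sketch the Huber estimator's sensitivity is 1/sqrt(k), where k is the number of points inside the clipping region, so it exceeds the mean's 1/sqrt(n): the very clipping that bounds the classic influence function concentrates the adversary's budget on fewer effective samples, consistent with the tradeoff the abstract identifies.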
Pages: 5097-5109 (13 pages)