On the Adversarial Robustness of Decision Trees and a Symmetry Defense

Cited by: 0
Authors
Lindqvist, Blerta [1 ]
Affiliations
[1] Aalto Univ, Dept Comp Sci, Espoo 02150, Finland
Source
IEEE ACCESS | 2025, Vol. 13
Keywords
Perturbation methods; Robustness; Training; Boosting; Accuracy; Threat modeling; Diabetes; Decision trees; Current measurement; Closed box; Adversarial perturbation attacks; adversarial robustness; equivariance; gradient-boosting decision trees; invariance; symmetry defense; XGBoost;
DOI
10.1109/ACCESS.2025.3530695
CLC classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Gradient-boosting decision tree classifiers (GBDTs) are susceptible to adversarial perturbation attacks that change inputs slightly to cause misclassification. GBDTs are customarily used on non-image datasets that lack inherent symmetries, which might be why data symmetry in the context of GBDT classifiers has not received much attention. In this paper, we show that GBDTs can classify symmetric samples differently, which means that GBDTs lack invariance with respect to symmetry. Based on this, we defend GBDTs against adversarial perturbation attacks using symmetric adversarial samples in order to obtain correct classification. We apply and evaluate the symmetry defense against six adversarial perturbation attacks on the GBDT classifiers of nine datasets, with a threat model that ranges from zero-knowledge to perfect-knowledge adversaries. Against zero-knowledge adversaries, we use the feature-inversion symmetry and exceed the accuracies of default and robust classifiers by up to 100 percentage points. Against perfect-knowledge adversaries for the GBDT classifier of the F-MNIST dataset, we use the feature-inversion and horizontal-flip symmetries and exceed the accuracies of default and robust classifiers by up to 96 percentage points. Finally, we show that the current definition of adversarial robustness, based on the minimum perturbation values of misclassifying adversarial samples, might be inadequate for two reasons. First, this definition assumes that attacks mostly succeed, failing to consider the case when attacks are unable to construct misclassifying adversarial samples against a classifier. Second, GBDT adversarial robustness as currently defined can decrease by training with additional samples, even samples from the training set itself, which counters the common wisdom that more training samples should increase robustness. With the current definition of GBDT adversarial robustness, we can make GBDTs more adversarially robust by training them with fewer samples! The code is publicly available at https://github.com/blertal/xgboost-symmetry-defense.
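
To make the mechanism concrete, below is a minimal sketch of the feature-inversion symmetry defense under a zero-knowledge threat model. It assumes features min-max scaled to [0, 1] and uses synthetic tabular data; the names (invert, defended_predict) and hyperparameters are illustrative assumptions, not taken from the paper's repository, which evaluates six attacks on nine real datasets rather than this toy setup.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBClassifier

def invert(X):
    # Feature-inversion symmetry on [0, 1]-scaled data: x -> 1 - x.
    return 1.0 - X

# Toy tabular data standing in for one of the paper's nine datasets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X = MinMaxScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An undefended GBDT, and a second GBDT trained on inverted features.
clf = XGBClassifier(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
clf_sym = XGBClassifier(n_estimators=200, max_depth=4).fit(invert(X_tr), y_tr)

def defended_predict(X_in):
    # Classify the symmetric (inverted) version of each sample.
    # A zero-knowledge adversary perturbs inputs against clf, so the
    # perturbation is not tailored to clf_sym composed with invert.
    return clf_sym.predict(invert(X_in))

print("plain accuracy:   ", clf.score(X_te, y_te))
print("defended accuracy:", (defended_predict(X_te) == y_te).mean())

On clean data the two classifiers typically agree almost everywhere, so this construction costs little accuracy; its value is that an adversarial perturbation crafted against the undefended classifier need not transfer to the inverted-feature classifier.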
Pages: 16120-16132 (13 pages)