On the Adversarial Robustness of Decision Trees and a Symmetry Defense

Cited by: 0
Authors
Lindqvist, Blerta [1 ]
Institution
[1] Aalto Univ, Dept Comp Sci, Espoo 02150, Finland
Source
IEEE ACCESS | 2025, Vol. 13
Keywords
Perturbation methods; Robustness; Training; Boosting; Accuracy; Threat modeling; Diabetes; Decision trees; Current measurement; Closed box; Adversarial perturbation attacks; adversarial robustness; equivariance; gradient-boosting decision trees; invariance; symmetry defense; XGBoost;
DOI
10.1109/ACCESS.2025.3530695
CLC Classification
TP [Automation technology, computer technology];
Subject Classification Code
0812 ;
Abstract
Gradient-boosting decision tree classifiers (GBDTs) are susceptible to adversarial perturbation attacks that change inputs slightly to cause misclassification. GBDTs are customarily used on non-image datasets that lack inherent symmetries, which might be why data symmetry in the context of GBDT classifiers has not received much attention. In this paper, we show that GBDTs can classify symmetric samples differently, which means that GBDTs lack invariance with respect to symmetry. Based on this, we defend GBDTs against adversarial perturbation attacks using symmetric versions of adversarial samples in order to obtain correct classifications. We apply and evaluate the symmetry defense against six adversarial perturbation attacks on the GBDT classifiers of nine datasets, with a threat model that ranges from zero-knowledge to perfect-knowledge adversaries. Against zero-knowledge adversaries, we use the feature-inversion symmetry and exceed the accuracies of default and robust classifiers by up to 100 percentage points. Against perfect-knowledge adversaries attacking the GBDT classifier of the F-MNIST dataset, we use the feature-inversion and horizontal-flip symmetries and exceed the accuracies of default and robust classifiers by up to 96 percentage points. Finally, we show that the current definition of adversarial robustness, based on the minimum perturbation values of misclassifying adversarial samples, may be inadequate for two reasons. First, this definition assumes that attacks mostly succeed, failing to consider the case where an attack cannot construct any misclassifying adversarial sample against a classifier. Second, GBDT adversarial robustness as currently defined can decrease when a classifier is trained with additional samples, even ordinary training samples, which counters the common wisdom that more training data should increase robustness. Under the current definition of GBDT adversarial robustness, we can thus make GBDTs more adversarially robust by training them with fewer samples!
The code is publicly available at https://github.com/blertal/xgboost-symmetry-defense.
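The inference-time mechanics of the feature-inversion defense can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: a hypothetical single-stump classifier stands in for the paper's XGBoost GBDTs, and features are assumed normalized to [0, 1] so that inversion maps each value v to 1 - v.

```python
def invert(x):
    """Feature-inversion symmetry: map every feature value v to 1 - v."""
    return [1.0 - v for v in x]

def clf_default(x):
    """Toy 'default' classifier: a single decision stump on feature 0."""
    return 1 if x[0] > 0.5 else 0

def clf_symmetry(x):
    """Hypothetical classifier fit on feature-inverted training data;
    its stump threshold mirrors the default one around 0.5."""
    return 1 if x[0] < 0.5 else 0

def defended(x):
    """Symmetry defense at inference: invert the incoming sample, then
    classify it with the model trained on inverted data."""
    return clf_symmetry(invert(x))

x = [0.8, 0.3]
# Lack of invariance: the default model disagrees with itself on the
# symmetric sample, which is the paper's starting observation.
print(clf_default(x), clf_default(invert(x)))  # → 1 0
# The defense reproduces the default prediction on clean samples.
print(defended(x))                             # → 1
```

In the paper, the symmetry-side model is a separately trained GBDT, so perturbations crafted against the default model do not transfer to it; this exactly mirrored stump pair only illustrates the inversion-then-classify pipeline, not the robustness gain itself.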
Pages: 16120-16132 (13 pages)
Related Papers
50 entries in total
  • [21] Efficient Training of Robust Decision Trees Against Adversarial Examples
    Vos, Daniel
    Verwer, Sicco
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139 : 7599 - 7608
  • [22] ROBY: Evaluating the adversarial robustness of a deep model by its decision boundaries
    Jin, Haibo
    Chen, Jinyin
    Zheng, Haibin
    Wang, Zhen
    Xiao, Jun
    Yu, Shanqing
    Ming, Zhaoyan
    INFORMATION SCIENCES, 2022, 587 : 97 - 122
  • [24] Improving Adversarial Robustness With Adversarial Augmentations
    Chen, Chuanxi
    Ye, Dengpan
    He, Yiheng
    Tang, Long
    Xu, Yue
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (03) : 5105 - 5117
  • [25] Enhancing Robustness of Weather Removal: Preprocessing-Based Defense Against Adversarial Attacks
    Frants, Vladimir
    Agaian, Sos
    MULTIMODAL IMAGE EXPLOITATION AND LEARNING 2024, 2024, 13033
  • [26] Enhancing robustness of person detection: A universal defense filter against adversarial patch attacks
    Mao, Zimin
    Chen, Shuiyan
    Miao, Zhuang
    Li, Heng
    Xia, Beihao
    Cai, Junzhe
    Yuan, Wei
    You, Xinge
    COMPUTERS & SECURITY, 2024, 146
  • [27] Adversarial Robustness for Code
    Bielik, Pavol
    Vechev, Martin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [28] Adversarial Robustness Curves
    Goepfert, Christina
    Goepfert, Jan Philip
    Hammer, Barbara
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT I, 2020, 1167 : 172 - 179
  • [29] Studies of stability and robustness for artificial neural networks and boosted decision trees
    Yang, Hai-Jun
    Roe, Byron P.
    Zhu, Ji
    NUCLEAR INSTRUMENTS & METHODS IN PHYSICS RESEARCH SECTION A-ACCELERATORS SPECTROMETERS DETECTORS AND ASSOCIATED EQUIPMENT, 2007, 574 (02): : 342 - 349
  • [30] The Adversarial Robustness of Sampling
    Ben-Eliezer, Omri
    Yogev, Eylon
    PODS'20: PROCEEDINGS OF THE 39TH ACM SIGMOD-SIGACT-SIGAI SYMPOSIUM ON PRINCIPLES OF DATABASE SYSTEMS, 2020, : 49 - 62