On the Adversarial Robustness of Decision Trees and a Symmetry Defense

Cited: 0
Authors
Lindqvist, Blerta [1 ]
Affiliation
[1] Aalto Univ, Dept Comp Sci, Espoo 02150, Finland
Source
IEEE ACCESS | 2025 / Vol. 13
Keywords
Perturbation methods; Robustness; Training; Boosting; Accuracy; Threat modeling; Diabetes; Decision trees; Current measurement; Closed box; Adversarial perturbation attacks; adversarial robustness; equivariance; gradient-boosting decision trees; invariance; symmetry defense; XGBoost;
DOI
10.1109/ACCESS.2025.3530695
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Gradient-boosting decision tree classifiers (GBDTs) are susceptible to adversarial perturbation attacks that change inputs slightly to cause misclassification. GBDTs are customarily used on non-image datasets that lack inherent symmetries, which might be why data symmetry in the context of GBDT classifiers has not received much attention. In this paper, we show that GBDTs can classify symmetric samples differently, which means that GBDTs lack invariance with respect to symmetry. Based on this, we defend GBDTs against adversarial perturbation attacks using symmetric adversarial samples in order to obtain correct classification. We apply and evaluate the symmetry defense against six adversarial perturbation attacks on the GBDT classifiers of nine datasets with a threat model that ranges from zero-knowledge to perfect-knowledge adversaries. Against zero-knowledge adversaries, we use the feature inversion symmetry and exceed the accuracies of default and robust classifiers by up to 100 percentage points. Against perfect-knowledge adversaries for the GBDT classifier of the F-MNIST dataset, we use the feature inversion and horizontal flip symmetries and exceed the accuracies of default and robust classifiers by up to 96 percentage points. Finally, we show that the current definition of adversarial robustness based on the minimum perturbation values of misclassifying adversarial samples might be inadequate for two reasons. First, this definition assumes that attacks mostly succeed, failing to consider the case when attacks are unable to construct misclassifying adversarial samples against a classifier. Second, GBDT adversarial robustness as currently defined can decrease by training with additional samples, even non-adversarial training samples, which counters the common wisdom that more training samples should increase robustness. With the current definition of GBDT adversarial robustness, we can make GBDTs more adversarially robust by training them with fewer samples!
The code is publicly available at https://github.com/blertal/xgboost-symmetry-defense.
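The feature-inversion symmetry defense described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes features scaled to [0, 1], uses a synthetic toy dataset, and substitutes scikit-learn's GradientBoostingClassifier for the XGBoost classifiers the paper actually uses. The idea it demonstrates is training a second classifier on inversion-transformed features and classifying the inverted input with it, so clean predictions agree while perturbations crafted against the default classifier need not carry over.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # features assumed scaled to [0, 1]
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)     # hypothetical toy labels

def invert(X):
    """Feature inversion symmetry: x -> 1 - x for [0, 1]-scaled features."""
    return 1.0 - X

# Default GBDT classifier, trained on the original features.
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Symmetry-defense classifier, trained on inverted features. At inference
# it classifies the inverted sample, preserving clean accuracy while an
# adversarial perturbation crafted against `clf` need not transfer to it.
clf_sym = GradientBoostingClassifier(random_state=0).fit(invert(X), y)

# On clean samples the two pipelines should agree almost everywhere.
agree = (clf.predict(X) == clf_sym.predict(invert(X))).mean()
```

A perfect-knowledge adversary would also have to fool the symmetric pipeline, which is the setting in which the paper additionally combines the horizontal flip symmetry for F-MNIST.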
Pages: 16120 - 16132 (13 pages)
Related Papers
50 items
  • [1] Robustness analysis of classical and fuzzy decision trees under adversarial evasion attack
    Chan, Patrick P. K.
    Zheng, Juan
    Liu, Han
    Tsang, E. C. C.
    Yeung, Daniel S.
    APPLIED SOFT COMPUTING, 2021, 107
  • [2] IDEA: Invariant defense for graph adversarial robustness
    Tao, Shuchang
    Cao, Qi
    Shen, Huawei
    Wu, Yunfan
    Xu, Bingbing
    Cheng, Xueqi
    INFORMATION SCIENCES, 2024, 680
  • [3] Attack as Defense: Characterizing Adversarial Examples using Robustness
    Zhao, Zhe
    Chen, Guangke
    Wang, Jingyi
    Yang, Yiwei
    Song, Fu
    Sun, Jun
    ISSTA '21: PROCEEDINGS OF THE 30TH ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, 2021, : 42 - 55
  • [4] Enhancing Tracking Robustness with Auxiliary Adversarial Defense Networks
    Wu, Zhewei
    Yu, Ruilong
    Liu, Qihe
    Cheng, Shuying
    Qiu, Shilin
    Zhou, Shijie
    COMPUTER VISION-ECCV 2024, PT XLVI, 2025, 15104 : 198 - 214
  • [5] Deep Defense: Training DNNs with Improved Adversarial Robustness
    Yan, Ziang
    Guo, Yiwen
    Zhang, Changshui
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [6] Genetic Adversarial Training of Decision Trees
    Ranzato, Francesco
    Zanella, Marco
    PROCEEDINGS OF THE 2021 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE (GECCO'21), 2021, : 358 - 367
  • [7] Symmetry Defense Against CNN Adversarial Perturbation Attacks
    Lindqvist, Blerta
    INFORMATION SECURITY, ISC 2023, 2023, 14411 : 142 - 160
  • [8] Detection and Defense: Student-Teacher Network for Adversarial Robustness
    Park, Kyoungchan
    Kang, Pilsung
    IEEE ACCESS, 2024, 12 : 82742 - 82752
  • [9] Achieving Fairness with Decision Trees: An Adversarial Approach
    Vincent Grari
    Boris Ruf
    Sylvain Lamprier
    Marcin Detyniecki
    Data Science and Engineering, 2020, 5 : 99 - 110
  • [10] Robust Decision Trees Against Adversarial Examples
    Chen, Hongge
    Zhang, Huan
    Boning, Duane
    Hsieh, Cho-Jui
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97