On the Adversarial Robustness of Decision Trees and a Symmetry Defense

Cited by: 0
Author
Lindqvist, Blerta [1 ]
Affiliation
[1] Aalto Univ, Dept Comp Sci, Espoo 02150, Finland
Source
IEEE ACCESS, 2025, Volume 13
Keywords
Perturbation methods; Robustness; Training; Boosting; Accuracy; Threat modeling; Diabetes; Decision trees; Current measurement; Closed box; Adversarial perturbation attacks; adversarial robustness; equivariance; gradient-boosting decision trees; invariance; symmetry defense; XGBoost;
DOI
10.1109/ACCESS.2025.3530695
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
Gradient-boosting decision tree classifiers (GBDTs) are susceptible to adversarial perturbation attacks that change inputs slightly to cause misclassification. GBDTs are customarily used on non-image datasets that lack inherent symmetries, which might be why data symmetry in the context of GBDT classifiers has not received much attention. In this paper, we show that GBDTs can classify symmetric samples differently, which means that GBDTs lack invariance with respect to symmetry. Based on this, we defend GBDTs against adversarial perturbation attacks using symmetric adversarial samples in order to obtain correct classification. We apply and evaluate the symmetry defense against six adversarial perturbation attacks on the GBDT classifiers of nine datasets, with a threat model that ranges from zero-knowledge to perfect-knowledge adversaries. Against zero-knowledge adversaries, we use the feature-inversion symmetry and exceed the accuracies of default and robust classifiers by up to 100 percentage points. Against perfect-knowledge adversaries for the GBDT classifier of the F-MNIST dataset, we use the feature-inversion and horizontal-flip symmetries and exceed the accuracies of default and robust classifiers by up to 96 percentage points. Finally, we show that the current definition of adversarial robustness, based on the minimum perturbation values of misclassifying adversarial samples, might be inadequate for two reasons. First, this definition assumes that attacks mostly succeed and fails to consider the case where attacks are unable to construct misclassifying adversarial samples against a classifier. Second, GBDT adversarial robustness as currently defined can decrease when a classifier is trained with additional samples, even samples drawn from the training set itself, which counters the common wisdom that more training samples should increase robustness. Under the current definition of GBDT adversarial robustness, we can make GBDTs more adversarially robust by training them with fewer samples! The code is publicly available at https://github.com/blertal/xgboost-symmetry-defense.
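To make the defense concrete, the sketch below illustrates, in Python with XGBoost, one way the feature-inversion symmetry described in the abstract could be applied against zero-knowledge adversaries: features scaled to [0, 1] are inverted as x -> 1 - x, a second GBDT is trained on the inverted data, and inference classifies the inverted copy of each (possibly adversarial) input. This is a minimal sketch under those assumptions; the function names (invert_features, train_defended_pair, defended_predict) and the exact training setup are illustrative, not the authors' published code, which is in the linked repository.

```python
# Minimal sketch of a feature-inversion symmetry defense for GBDTs,
# assuming features normalized to [0, 1] so that inversion is x -> 1 - x.
# Function names and hyperparameters are illustrative assumptions, not
# the paper's published implementation.
import numpy as np
import xgboost as xgb


def invert_features(X):
    """Apply the feature-inversion symmetry to inputs scaled to [0, 1]."""
    return 1.0 - np.asarray(X)


def train_defended_pair(X_train, y_train):
    """Train one GBDT on the original data and one on its inverted copy."""
    default_model = xgb.XGBClassifier(n_estimators=200, eval_metric="logloss")
    default_model.fit(X_train, y_train)
    inverted_model = xgb.XGBClassifier(n_estimators=200, eval_metric="logloss")
    inverted_model.fit(invert_features(X_train), y_train)
    return default_model, inverted_model


def defended_predict(inverted_model, X):
    """Classify the inverted copy of each (possibly adversarial) input.

    A perturbation crafted against the default model is not crafted
    against the inverted model applied to 1 - x, so the symmetric
    prediction tends to recover the correct label.
    """
    return inverted_model.predict(invert_features(X))
```

Because GBDTs lack invariance under this symmetry, an adversarial sample crafted against the default classifier generally does not remain adversarial when its inverted copy is classified by the model trained on inverted data.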
Pages: 16120-16132
Page count: 13
Related Papers
50 items in total
  • [31] Defending Against Adversarial Examples via Soft Decision Trees Embedding
    Hua, Yingying
    Ge, Shiming
    Gao, Xindi
    Jin, Xin
    Zeng, Dan
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 2106 - 2114
  • [32] Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
    Andriushchenko, Maksym
    Hein, Matthias
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [33] Improvement of timetable robustness by analysis of drivers' operation based on decision trees
    Ochiai, Yasufumi
    Masuma, Yoshiki
    Tomii, Norio
    JOURNAL OF RAIL TRANSPORT PLANNING & MANAGEMENT, 2019, 9 : 57 - 65
  • [34] Adversarial attacks and adversarial robustness in computational pathology
    Ghaffari Laleh, Narmin
    Truhn, Daniel
    Veldhuizen, Gregory Patrick
    Han, Tianyu
    van Treeck, Marko
    Buelow, Roman D.
    Langer, Rupert
    Dislich, Bastian
    Boor, Peter
    Schulz, Volkmar
    Kather, Jakob Nikolas
    NATURE COMMUNICATIONS, 2022, 13 (01)
  • [36] Recent Advances in Adversarial Training for Adversarial Robustness
    Bai, Tao
    Luo, Jinqi
    Zhao, Jun
    Wen, Bihan
    Wang, Qian
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 4312 - 4321
  • [37] Robustness Tokens: Towards Adversarial Robustness of Transformers
    Pulfer, Brian
    Belousov, Yury
    Voloshynovskiy, Slava
    COMPUTER VISION - ECCV 2024, PT LIX, 2025, 15117 : 110 - 127
  • [38] Stylized Adversarial Defense
    Naseer, Muzammal
    Khan, Salman
    Hayat, Munawar
    Khan, Fahad Shahbaz
    Porikli, Fatih
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (05) : 6403 - 6414
  • [39] Adversarial Minimax Training for Robustness Against Adversarial Examples
    Komiyama, Ryota
    Hattori, Motonobu
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302 : 690 - 699
  • [40] EXPLOITING DOUBLY ADVERSARIAL EXAMPLES FOR IMPROVING ADVERSARIAL ROBUSTNESS
    Byun, Junyoung
    Go, Hyojun
    Cho, Seungju
    Kim, Changick
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1331 - 1335