Robustness of Classifiers to Changing Environments

Cited: 0
Authors
Abbasian, Houman [1]
Drummond, Chris [2]
Japkowicz, Nathalie [1]
Matwin, Stan [1,3]
Affiliations
[1] Univ Ottawa, Sch Informat Technol & Engn, Ottawa, ON K1N 6N5, Canada
[2] Natl Res Council Canada, Inst Informat Technol, Ottawa, ON K1A 0R6, Canada
[3] Polish Acad Sci, Inst Comp Sci, Warsaw, Poland
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
Classifier evaluation; changing environments; classifier robustness; classification
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we test some of the most commonly used classifiers to identify which ones are the most robust to changing environments. The environment may change over time due to contextual or definitional changes, or it may change with location. It would be surprising if the performance of common classifiers did not degrade with these changes. The question we address here is whether some types of classifiers are inherently more immune than others to these effects. In this study, we simulate a changing environment by reducing the influence of the most significant attributes on the class. Based on our analysis, K-Nearest Neighbor and Artificial Neural Networks are the most robust learners, ensemble algorithms are somewhat robust, whereas Naive Bayes, Logistic Regression and particularly Decision Trees are the most affected.
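The experimental setup described in the abstract can be illustrated with a small sketch. The Python code below is hypothetical and not the authors' implementation: it approximates "reducing the influence of the most significant attributes on the class" by permuting the single most informative attribute in the test set, then measures how much each classifier's accuracy drops. The dataset, the attribute-ranking method (mutual information), and the classifier settings are assumptions; the paper's exact procedure may differ.

# Hypothetical sketch of the robustness comparison (assumed procedure):
# weaken the most significant attribute's link to the class at test time
# and compare the accuracy drop across classifier families.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Rank attributes by their influence on the class (mutual information here).
top = int(np.argmax(mutual_info_classif(X_train, y_train, random_state=0)))

# "Changed" environment: the top attribute no longer predicts the class,
# simulated by permuting its values in the test set.
X_shift = X_test.copy()
X_shift[:, top] = rng.permutation(X_shift[:, top])

models = {
    "k-Nearest Neighbor": KNeighborsClassifier(),
    "Neural Network": MLPClassifier(max_iter=1000, random_state=0),
    "Random Forest (ensemble)": RandomForestClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

for name, clf in models.items():
    clf.fit(X_train, y_train)
    acc_orig = accuracy_score(y_test, clf.predict(X_test))
    acc_shift = accuracy_score(y_test, clf.predict(X_shift))
    # A smaller drop indicates a classifier that is more robust to the change.
    print(f"{name:26s} original={acc_orig:.3f} shifted={acc_shift:.3f} drop={acc_orig - acc_shift:.3f}")

Under this simplified setup, the drop column is the quantity of interest: classifiers whose accuracy degrades least when the top attribute is scrambled would count as more robust in the sense the abstract describes.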
Pages: 232+
Number of pages: 3