Understanding Survival Models Through Counterfactual Explanations

Cited by: 0
Authors
Alabdallah, Abdallah [1 ]
Jakubowski, Jakub [2 ]
Pashami, Sepideh [1 ]
Bobek, Szymon [3 ,4 ,5 ]
Ohlsson, Mattias [1 ]
Rognvaldsson, Thorsteinn [1 ]
Nalepa, Grzegorz J. [3 ,4 ,5 ]
Affiliations
[1] Halmstad Univ, Ctr Appl Intelligent Syst Res CAISR, Halmstad, Sweden
[2] AGH Univ Sci & Technol, Dept Appl Comp Sci, Krakow, Poland
[3] Jagiellonian Univ, Fac Phys Astron & Appl Comp Sci, Inst Appl Comp Sci, Krakow, Poland
[4] Jagiellonian Univ, Jagiellonian Human Ctr AI Lab JAHCAI, Krakow, Poland
[5] Jagiellonian Univ, Mark Kac Ctr Complex Syst Res, Krakow, Poland
Funding
EU Horizon 2020
Keywords
Survival Analysis; Explainable Artificial Intelligence; Survival Patterns; Counterfactual Explanations;
DOI
10.1007/978-3-031-63772-8_28
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The development of black-box survival models has created a need for methods that explain their outputs, just as in the case of traditional machine learning methods. Survival models usually predict functions rather than point estimates, and this special nature of their output makes their operation more difficult to explain. We propose a method to generate plausible counterfactual explanations for survival models. The method supports two options for handling the functional output of survival models. The first relies on Survival Scores, based on the area under the survival function, and is better suited to proportional hazards models. The second relies on Survival Patterns in the predictions of the survival model, which represent groups that are significantly different from the survival perspective. This guarantees an intuitive, well-defined change from one risk group (Survival Pattern) to another and can handle more realistic cases where the proportional hazards assumption does not hold. The method uses a Particle Swarm Optimization algorithm to optimize a loss function with four objectives: the desired change in the target, proximity to the explained example, likelihood, and the actionability of the counterfactual example. Two predictive maintenance datasets and one medical dataset are used to illustrate the results in different settings. The results show that our method produces plausible counterfactuals that increase the understanding of black-box survival models.
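The two ingredients named in the abstract, a Survival Score computed as the area under the predicted survival function and a four-objective loss minimized by Particle Swarm Optimization, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the toy survival model, the Gaussian plausibility term, the penalty forms, and all weights are assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box survival model (a stand-in, assumed here): exponential
# survival with a hazard driven by the features through a softplus.
TIME_GRID = np.linspace(0.0, 10.0, 101)
W = np.array([0.5, 0.3, 0.2])

def predict_survival(x):
    hazard = np.log1p(np.exp(W @ x))      # softplus keeps the hazard positive
    return np.exp(-hazard * TIME_GRID)    # S(t) on the time grid

def survival_score(x):
    # "Survival Score": area under the predicted survival function,
    # computed with the trapezoidal rule on the time grid.
    s = predict_survival(x)
    return float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(TIME_GRID)))

X_ORIG = np.array([1.0, 0.5, -0.2])            # example to be explained
TARGET = survival_score(X_ORIG) * 1.5          # desired: higher survival score
MU, SIGMA = np.zeros(3), np.ones(3)            # assumed data distribution
IMMUTABLE = np.array([0.0, 0.0, 1.0])          # third feature is not actionable

def loss(x):
    # Four objectives from the abstract; weights are illustrative choices.
    target_term = max(0.0, TARGET - survival_score(x)) ** 2   # desired change
    proximity = np.sum((x - X_ORIG) ** 2)                     # stay close
    neg_loglik = np.sum(((x - MU) / SIGMA) ** 2)              # plausibility
    actionability = np.sum((IMMUTABLE * (x - X_ORIG)) ** 2)   # frozen features
    return 10.0 * target_term + proximity + 0.1 * neg_loglik + 100.0 * actionability

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Minimal Particle Swarm Optimizer: each particle tracks its personal
    # best; the swarm tracks a global best that steers the velocities.
    pos = rng.normal(X_ORIG, 1.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

counterfactual = pso(loss, dim=3)
```

The resulting counterfactual raises the Survival Score toward the target while staying close to the original example and leaving the immutable feature essentially unchanged; the heavy actionability weight is what keeps the frozen feature in place.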
Pages: 310-324 (15 pages)