Assessing and comparing interpretability techniques for artificial neural networks breast cancer classification

Cited by: 12
Authors
Hakkoum, Hajar [1]
Idri, Ali [1,2]
Abnane, Ibtissam [1]
Affiliations
[1] Mohammed V Univ Rabat, ENSIAS, Software Project Management Res Team, Rabat, Morocco
[2] Mohammed VI Polytech Univ, MSDA, Ben Guerir, Morocco
Keywords
Interpretability; explainability; breast cancer; diagnosis; LIME; Partial Dependence Plot; feature importance
DOI
10.1080/21681163.2021.1901784
CLC Number
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Breast cancer (BC) is the most common type of cancer among women. Fortunately, early detection and improved treatments have helped decrease the number of deaths. Data Mining techniques have long assisted BC tasks, whether screening, diagnosis, prognosis, treatment, monitoring, or management. Nowadays, the use of Data Mining is entering a new era: the main objective is no longer to replace humans but to enhance their capabilities, which is why Artificial Intelligence is now also referred to as Intelligence Augmentation. In this context, interpretability helps domain experts learn new patterns and machine learning experts debug their models. This paper investigates three black-box interpretation techniques, Feature Importance, Partial Dependence Plot, and LIME, applied to two types of feed-forward Artificial Neural Networks, Multilayer Perceptrons and Radial Basis Function Networks, trained on the Wisconsin Original dataset for breast cancer diagnosis. Results showed that the local, instance-level LIME explanations agreed with the global interpretations produced by the other two techniques. Global and local interpretability techniques can thus be combined to assess the trustworthiness of a black-box model.
Pages: 587-599
Number of pages: 13
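
To make the workflow described in the abstract concrete, the sketch below trains a feed-forward multilayer perceptron on a Wisconsin breast cancer dataset and then probes it with permutation feature importance, a partial dependence computation, and a local LIME explanation. This is an illustrative sketch, not the authors' code: scikit-learn's built-in Diagnostic dataset stands in for the Wisconsin Original dataset used in the paper, the hyperparameters are arbitrary, the Radial Basis Function Network variant is omitted, and the third-party `lime` package is assumed to be installed.

```python
# Illustrative sketch only (not the authors' code): a feed-forward MLP trained on
# a Wisconsin breast cancer dataset, then inspected with the three techniques the
# abstract names. scikit-learn's Diagnostic dataset is a stand-in for the Original
# dataset used in the paper; hyperparameters are arbitrary; the RBFN is omitted.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0, stratify=data.target
)

# Black-box model: feature scaling followed by a multilayer perceptron.
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000, random_state=0),
)
mlp.fit(X_train, y_train)

# Global interpretation 1: permutation feature importance on held-out data.
perm = permutation_importance(mlp, X_test, y_test, n_repeats=10, random_state=0)
top = np.argsort(perm.importances_mean)[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]}: {perm.importances_mean[i]:.4f}")

# Global interpretation 2: partial dependence of the prediction on the top feature.
pdp = partial_dependence(mlp, X_test, features=[int(top[0])])
print("PDP average response:", np.round(pdp["average"][0], 3))

# Local interpretation: LIME explanation of a single test instance.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
exp = explainer.explain_instance(X_test[0], mlp.predict_proba, num_features=5)
print(exp.as_list())
```

If the permutation importances and the partial dependence curve point to the same dominant features that LIME highlights for individual instances, that agreement illustrates the kind of global/local consistency the abstract reports.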