Directive Explanations for Actionable Explainability in Machine Learning Applications

Cited by: 0
Authors
Singh, Ronal [1 ]
Miller, Tim [1 ]
Lyons, Henrietta [1 ]
Sonenberg, Liz [1 ]
Velloso, Eduardo [1 ]
Vetere, Frank [1 ]
Howe, Piers [2 ]
Dourish, Paul [3 ]
Affiliations
[1] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic 3010, Australia
[2] Univ Melbourne, Melbourne Sch Psychol Sci, Melbourne, Vic 3010, Australia
[3] Univ Calif Irvine, Donald Bren Sch Informat & Comp Sci, Irvine, CA 92697 USA
Funding
Australian Research Council
Keywords
Explainable AI; directive explanations; counterfactual explanations; BLACK-BOX;
DOI
10.1145/3579363
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this article, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanations (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people's preferences for and perceptions of directive explanations through two online studies, one quantitative and the other qualitative, each covering two domains (credit scoring and employee satisfaction). We find a significant preference for both forms of directive explanations over non-directive counterfactual explanations. However, we also find that preferences are shaped by many factors, including individual preferences and social factors. We conclude that deciding what type of explanation to provide requires information about the recipients and other contextual information. This reinforces the need for a human-centered and context-specific approach to explainable AI.
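The abstract states that directive explanations can be generated computationally. As an illustration only (this is not the authors' algorithm, and all feature names, weights, and costs below are hypothetical), the sketch below searches a small set of actionable feature changes for the cheapest combination that flips a toy credit model's rejection into an approval, which is the core idea behind a directive-specific explanation:

```python
# Illustrative sketch: a toy "credit" model approves an applicant when a
# weighted score crosses a threshold. We enumerate candidate actions the
# individual could realistically take and return the cheapest plan that
# earns approval. All names and numbers here are invented for the example.
from itertools import product

def approve(applicant):
    """Toy credit model: approve if the weighted score reaches 0.6."""
    score = (0.5 * applicant["income"]
             + 0.3 * applicant["repayment_history"]
             + 0.2 * (1 - applicant["debt_ratio"]))
    return score >= 0.6

def directive_explanation(applicant, actions):
    """Return (plan, cost) for the cheapest action combination that flips
    the decision to approval, or None if no combination works.

    `actions` maps a feature to a list of (delta, cost) options, where
    (0.0, 0.0) means "leave this feature unchanged".
    """
    best = None
    features = list(actions)
    for combo in product(*(actions[f] for f in features)):
        candidate = dict(applicant)
        cost = 0.0
        for f, (delta, c) in zip(features, combo):
            candidate[f] += delta
            cost += c
        if approve(candidate) and (best is None or cost < best[1]):
            # Keep only the non-trivial changes as the directive "plan".
            plan = [(f, d) for f, (d, _) in zip(features, combo) if d]
            best = (plan, cost)
    return best

applicant = {"income": 0.4, "repayment_history": 0.5, "debt_ratio": 0.6}
actions = {
    "income": [(0.0, 0.0), (0.4, 2.0)],       # e.g. take on extra work
    "debt_ratio": [(0.0, 0.0), (-0.3, 1.0)],  # e.g. pay down a loan
}
plan, cost = directive_explanation(applicant, actions)
print(plan, cost)  # the cheapest set of actions that yields approval
```

A directive-generic explanation, by contrast, would describe which kinds of actions tend to change the outcome rather than prescribe one specific plan; the exhaustive enumeration here is only tractable for a handful of discrete actions.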
Pages: 26