Sibyl: Explaining Machine Learning Models for High-Stakes Decision Making

Cited by: 2
Authors
Zytek, Alexandra [1 ]
Liu, Dongyu [1 ]
Vaithianathan, Rhema [2 ]
Veeramachaneni, Kalyan [1 ]
Institutions
[1] MIT, Cambridge, MA 02139 USA
[2] Auckland Univ Technol, Auckland, New Zealand
Funding
U.S. National Science Foundation;
Keywords
machine learning; interpretability; explainability; child welfare; social good; tool;
DOI
10.1145/3411763.3451743
CLC Classification
TP3 [Computing Technology, Computer Technology];
Subject Classification
0812 ;
Abstract
As machine learning is applied to an increasingly large number of domains, the need for an effective way to explain its predictions grows apace. In the domain of child welfare screening, machine learning offers a promising method of consolidating the large amount of data that screeners must look at, potentially improving the outcomes for children reported to child welfare departments. Interviews and case studies suggest that adding an explanation alongside the model prediction may result in better outcomes, but it is not obvious what kind of explanation would be most useful in this context. Through a series of interviews and user studies, we developed Sibyl, a machine learning explanation dashboard specifically designed to aid child welfare screeners' decision making. When testing Sibyl, we evaluated four different explanation types, and based on this evaluation, concluded that a local feature contribution approach was most useful to screeners.
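The "local feature contribution" approach the abstract refers to attributes a single prediction to per-feature contributions. A minimal sketch of the idea for a linear model, where each feature's contribution is its weight times the feature's deviation from a baseline (here, the training mean), is shown below; the feature names, weights, and values are hypothetical and for illustration only, not from the Sibyl system:

```python
# Local feature contributions for a linear model: for one instance,
# contribution_i = weight_i * (x_i - baseline_i), so contributions
# sum (with the baseline prediction) to the model's output.

def local_contributions(weights, instance, baselines):
    """Per-feature contributions for a single instance."""
    return {name: w * (instance[name] - baselines[name])
            for name, w in weights.items()}

# Hypothetical screening-style features (illustrative values only).
weights = {"prior_referrals": 0.8, "household_size": -0.1}
baselines = {"prior_referrals": 1.0, "household_size": 3.0}
case = {"prior_referrals": 3.0, "household_size": 2.0}

contribs = local_contributions(weights, case, baselines)
# prior_referrals contributes 0.8 * (3 - 1) = 1.6 toward the score;
# household_size contributes -0.1 * (2 - 3) = 0.1.
```

For non-linear models, methods such as SHAP generalize this additive decomposition, but the interpretation presented to a screener is the same: a signed, per-feature breakdown of one prediction.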
Pages: 6
Related Papers
50 items total
  • [1] Sibyl: Understanding and Addressing the Usability Challenges of Machine Learning In High-Stakes Decision Making
    Zytek, Alexandra
    Liu, Dongyu
    Vaithianathan, Rhema
    Veeramachaneni, Kalyan
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2022, 28 (01) : 1161 - 1171
  • [2] Introduction to high-stakes IS risk and decision-making minitrack
    Port, Dan
    Wilf, Joel
    Proceedings of the Annual Hawaii International Conference on System Sciences, 2019, 2019-January
  • [3] Introduction to High-Stakes IS Risk and Decision-Making Minitrack
    Port, Dan
    Wilf, Joel
    PROCEEDINGS OF THE 52ND ANNUAL HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES, 2019, : 7352 - 7352
  • [4] Enabling Big Data and Machine Learning Applications in High-Stakes Environments
    Dahdal, Simon
    Tortonesi, Mauro
    PROCEEDINGS OF 2024 IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM, NOMS 2024, 2024,
  • [5] Machine vs Machine: Large Language Models (LLMs) in Applied Machine Learning High-Stakes Open-Book Exams
    Quille, Keith
    Alattyanyi, Csanad
    Becker, Brett A.
    Faherty, Roisin
    Gordon, Damian
    Harte, Miriam
    Hensman, Svetlana
    Hofmann, Markus
    Garcia, Jorge Jimenez
    Kuznetsov, Anthony
    Marais, Conrad
    Nolan, Keith
    Nicolai, Cianan
    O'Leary, Ciaran
    Zero, Andrzej
    RED-REVISTA DE EDUCACION A DISTANCIA, 2024, 24 (78):
  • [6] Integrating Risk-Averse and Constrained Reinforcement Learning for Robust Decision-Making in High-Stakes Scenarios
    Ahmad, Moiz
    Ramzan, Muhammad Babar
    Omair, Muhammad
    Habib, Muhammad Salman
    MATHEMATICS, 2024, 12 (13)
  • [8] Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
    Rudin, Cynthia
    Nature Machine Intelligence, 2019, 1 : 206 - 215
  • [9] Foresight, risk attitude, and utility maximization in naturalistic sequential high-stakes decision making
    Chen, Zhiqin
    John, Richard S.
    JOURNAL OF MATHEMATICAL PSYCHOLOGY, 2018, 86 : 41 - 50
  • [10] SEVEN RELIABILITY INDICES FOR HIGH-STAKES DECISION MAKING: DESCRIPTION, SELECTION, AND SIMPLE CALCULATION
    Smith, Stacey L.
    Vannest, Kimberly J.
    Davis, John L.
    PSYCHOLOGY IN THE SCHOOLS, 2011, 48 (10) : 1064 - 1075