Sibyl: Explaining Machine Learning Models for High-Stakes Decision Making

Cited: 2
Authors
Zytek, Alexandra [1 ]
Liu, Dongyu [1 ]
Vaithianathan, Rhema [2 ]
Veeramachaneni, Kalyan [1 ]
Affiliations
[1] MIT, Cambridge, MA 02139 USA
[2] Auckland Univ Technol, Auckland, New Zealand
Funding
US National Science Foundation;
Keywords
machine learning; interpretability; explainability; child welfare; social good; tool;
DOI
10.1145/3411763.3451743
Chinese Library Classification
TP3 [Computing Technology; Computer Technology];
Discipline Code
0812;
Abstract
As machine learning is applied to an increasingly large number of domains, the need for an effective way to explain its predictions grows apace. In the domain of child welfare screening, machine learning offers a promising method of consolidating the large amount of data that screeners must look at, potentially improving the outcomes for children reported to child welfare departments. Interviews and case studies suggest that presenting an explanation alongside the model prediction may lead to better outcomes, but it is not obvious what kind of explanation would be most useful in this context. Through a series of interviews and user studies, we developed Sibyl, a machine learning explanation dashboard specifically designed to aid child welfare screeners' decision making. When testing Sibyl, we evaluated four different explanation types and, based on this evaluation, determined that a local feature contribution approach was most useful to screeners.
Pages: 6