Crowdsourcing and Evaluating Concept-driven Explanations of Machine Learning Models

Cited by: 9
Authors
Mishra S. [1 ]
Rzeszotarski J.M. [1 ]
Affiliations
[1] Cornell University, United States
Keywords
Classification; Concepts; Explanations; Machine learning;
DOI
10.1145/3449213
Abstract
An important challenge in building explainable artificially intelligent (AI) systems is designing interpretable explanations. AI models often use low-level data features which may be hard for humans to interpret. Recent research suggests that situating machine decisions in abstract, human-understandable concepts can help. However, it is challenging to determine the right level of conceptual mapping. In this research, we explore granularity (of data features) and context (of data instances) as dimensions underpinning conceptual mappings. Based on these measures, we explore strategies for designing explanations in classification models. We introduce an end-to-end concept elicitation pipeline that supports gathering high-level concepts for a given data set. Through crowdsourced experiments, we examine how providing conceptual information shapes the effectiveness of explanations, finding that a balance between coarse- and fine-grained explanations helps users better estimate model predictions. We organize our findings into systematic themes that can inform design considerations for future systems. © 2021 ACM.
Related Papers
(50 total)
  • [21] ViCE: Visual Counterfactual Explanations for Machine Learning Models
    Gomez, Oscar
    Holter, Steffen
    Yuan, Jun
    Bertini, Enrico
    [J]. PROCEEDINGS OF THE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2020, 2020, : 531 - 535
  • [22] Concept formation and concept-driven discrimination in the Wisconsin card sorting test (WCST)
    Schleifer, L
    Bosel, R
    [J]. JOURNAL OF PSYCHOPHYSIOLOGY, 1995, 9 (04) : 372 - 372
  • [23] PrivacyToon: Concept-driven Storytelling with Creativity Support for Privacy Concepts
    Suh, Sangho
    Lamorea, Sydney
    Law, Edith
    Zhang-Kennedy, Leah
    [J]. PROCEEDINGS OF THE 2022 ACM DESIGNING INTERACTIVE SYSTEMS CONFERENCE, DIS 2022, 2022, : 41 - 57
  • [24] A Concept-Driven Construction of the Mondex Protocol Using Three Refinements
    Schellhorn, Gerhard
    Banach, Richard
    [J]. ABSTRACT STATE MACHINES, B AND Z, PROCEEDINGS, 2008, 5238 : 57 - +
  • [25] Concept-Driven Multi-Modality Fusion for Video Search
    Wei, Xiao-Yong
    Jiang, Yu-Gang
    Ngo, Chong-Wah
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2011, 21 (01) : 62 - 73
  • [26] Concept-driven trial and error to find out new functions of nanocellulose
    Kitaoka, Takuya
    [J]. ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 2019, 257
  • [27] Provenance-based Explanations for Machine Learning (ML) Models
    Turnau, Justin
    Akwari, Nkechi
    Lee, Seokki
    Rajput, Dwarkesh
    [J]. 2023 IEEE 39TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING WORKSHOPS, ICDEW, 2023, : 40 - 43
  • [28] DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models
    Cheng, Furui
    Ming, Yao
    Qu, Huamin
    [J]. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2021, 27 (02) : 1438 - 1447
  • [29] Crowdsourcing for Evaluating Machine Translation Quality
    Goto, Shinsuke
    Lin, Donghui
    Ishida, Toru
    [J]. LREC 2014 - NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2014, : 3456 - 3463
  • [30] Evaluating Quality of Visual Explanations of Deep Learning Models for Vision Tasks
    Yang, Yuqing
    Mahmoudpour, Saeed
    Schelkens, Peter
    Deligiannis, Nikos
    [J]. 2023 15TH INTERNATIONAL CONFERENCE ON QUALITY OF MULTIMEDIA EXPERIENCE, QOMEX, 2023, : 159 - 164