Expanding Explainability: Towards Social Transparency in AI systems

Cited by: 112
Authors
Ehsan, Upol [1]
Liao, Q. Vera [2]
Muller, Michael [2]
Riedl, Mark O. [1]
Weisz, Justin D. [2]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] IBM Res AI, Yorktown Hts, NY USA
Funding
U.S. National Science Foundation (NSF)
Keywords
Explainable AI; social transparency; human-AI interaction; explanations; Artificial Intelligence; sociotechnical; socio-organizational context; TRANSACTIVE MEMORY; ORGANIZATIONS; FRAMEWORK; COMMUNICATION; CREDIBILITY; DESIGN; TRUST; MEDIA;
DOI
10.1145/3411764.3445188
Chinese Library Classification (CLC) code
TP39 [Applications of Computers]
Subject classification codes
081203; 0835
Abstract
As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially-situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST's effect and implications at the technical, decision-making, and organizational level. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective actions, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.
Pages: 19