Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs)

Cited by: 1
Authors
Ehsan, Upol [1 ]
Watkins, Elizabeth Anne [2 ]
Wintersberger, Philipp [3 ,4 ]
Manger, Carina [5 ]
Kim, Sunnie S. Y. [6 ]
Van Berkel, Niels [7 ]
Riener, Andreas [5 ]
Riedl, Mark O. [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Intelligent Syst Res, Intel Labs, Thousand Oaks, CA USA
[3] Univ Appl Sci Upper Austria, Wels, Austria
[4] TU Wien, Vienna, Austria
[5] Tech Hsch Ingolstadt THI, Ingolstadt, Bavaria, Germany
[6] Princeton Univ, Princeton, NJ USA
[7] Aalborg Univ, Aalborg, Denmark
Source
EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024 | 2024
DOI
10.1145/3613905.3636311
Chinese Library Classification (CLC)
TP3 [Computing Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. Explainability of AI is more than just "opening" the black box - who opens it matters just as much as, if not more than, the ways of opening it. In the era of Large Language Models (LLMs), is "opening the black box" still a realistic goal for XAI? In this fourth CHI workshop on Human-centered XAI (HCXAI), we build on the maturation of the previous three installments to craft the coming-of-age story of HCXAI in the era of LLMs. We aim for actionable interventions that recognize both the affordances and the pitfalls of XAI. The goal of the fourth installment is to question how XAI assumptions fare in the era of LLMs and to examine how human-centered perspectives can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we emphasize "operationalizing." We seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.
Pages: 6