Towards Responsible AI: Developing Explanations to Increase Human-AI Collaboration

Cited by: 2
Authors
De Brito Duarte, Regina [1 ]
Affiliations
[1] Inst Super Tecn, Lisbon, Portugal
Source
Keywords
Human-AI Interaction; Human-AI Collaboration; AI Trust; Explainable AI;
DOI
10.3233/FAIA230126
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most current XAI models are designed primarily to verify the input-output relationships of AI models, without considering context. This objective does not always align with the goals of Human-AI collaboration: enhancing team performance and establishing appropriate levels of trust. Developing XAI models that promote justified trust therefore remains a challenge in the AI field, but it is a crucial step towards responsible AI. The focus of this research is to develop an XAI model optimized for human-AI collaboration, with the specific goal of generating explanations that improve understanding of the AI system's limitations and increase warranted trust in it. To this end, a user experiment was conducted to analyze how including explanations in the decision-making process affects trust in AI.
Pages: 470-482
Page count: 13