On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness

Cited by: 25
Authors
Kaestner, Lena [1 ]
Langer, Markus [2 ]
Lazar, Veronika [2 ]
Schomaecker, Astrid [1 ]
Speith, Timo [1 ,3 ]
Sterz, Sarah [1 ,3 ]
Affiliations
[1] Saarland Univ, Inst Philosophy, Saarbrucken, Germany
[2] Saarland Univ, Dept Psychol, Saarbrucken, Germany
[3] Saarland Univ, Dept Comp Sci, Saarbrucken, Germany
Keywords
Explainability; XAI; Trust; Trustworthiness; Requirements; NFR
Keywords Plus: ARTIFICIAL-INTELLIGENCE; BLACK-BOX; AUTOMATION; EXPLANATIONS; EXPERIENCE; SYSTEM; INFORMATION; JUSTICE
DOI
10.1109/REW53955.2021.00031
CLC Number (Chinese Library Classification)
TP31 [Computer software]
Subject Classification Codes
081202; 0835
Abstract
Recently, requirements for the explainability of software systems have gained prominence. One of the primary motivators for such requirements is that explainability is expected to facilitate stakeholders' trust in a system. Although this seems intuitively appealing, recent psychological studies indicate that explanations do not necessarily facilitate trust. Thus, explainability requirements might not be suitable for promoting trust. One way to accommodate this finding is, we suggest, to focus on trustworthiness instead of trust. While these two may come apart, we ideally want both: a trustworthy system and the stakeholders' trust. In this paper, we argue that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness - and that a system's explainability can crucially contribute to its trustworthiness.
Pages: 169-175
Number of pages: 7