Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?

Cited by: 4
Authors
Zerilli J. [1 ]
Knott A. [2 ]
Maclaurin J. [1 ]
Gavaghan C. [3 ]
Affiliations
[1] Department of Philosophy, University of Otago, Dunedin
[2] Department of Computer Science, University of Otago, Dunedin
[3] Faculty of Law, University of Otago, Dunedin
Keywords
Algorithmic decision-making; Explainable AI; Intentional stance; Transparency;
DOI
10.1007/s13347-018-0330-6
Abstract
We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool. © 2018, Springer Nature B.V.
Pages: 661 - 683
Page count: 22
Related Articles
50 items in total
  • [21] Algorithmic legitimacy in clinical decision-making
    Sune Holm
    Ethics and Information Technology, 2023, 25
  • [22] Gender discrimination in algorithmic decision-making
    Andreeva, Galina
    Matuszyk, Anna
    2nd International Conference on Advanced Research Methods and Analytics (CARMA 2018), 2018 : 251 - 251
  • [23] Understanding the Impact of Transparency on Algorithmic Decision Making Legitimacy
    Goad, David
    Gal, Uri
    Living with Monsters?: Social Implications of Algorithmic Phenomena, Hybrid Agency, and the Performativity of Technology, 2018, 543 : 64 - 79
  • [24] Articulation and transparency of decision-making by human research ethics committees
    Davies, Grant
    Gillam, Lynn
    Monash Bioethics Review, 2007, 26 (1-2) : 46 - 56
  • [25] Transparency in public health decision-making
    Garcia-Altes, Anna
    Argimon, Josep M.
    Gaceta Sanitaria, 2016, 30 : 9 - 13
  • [26] Transparency in the WTO's Decision-Making
    Delimatsis, Panagiotis
    Leiden Journal of International Law, 2014, 27 (03) : 701 - 726
  • [27] Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making
    Grimmelikhuijsen, Stephan
    Public Administration Review, 2023, 83 (02) : 241 - 262
  • [28] EMA, transparency, and decision-making process
    Rosen, Mans
    The Lancet, 2013, 382 (9886) : 26 - 27
  • [29] Complementarities between algorithmic and human decision-making: The case of antibiotic prescribing
    Ribers, Michael Allan
    Ullrich, Hannes
    QME-Quantitative Marketing and Economics, 2024 : 445 - 483
  • [30] The "Black Box" of Judicial Decision-Making: Between Human and Algorithmic Judgement
    Arduini, Sonia
    BioLaw Journal - Rivista di BioDiritto, 2021 (02) : 453 - 470