Inside algorithmic bureaucracy: Disentangling automated decision-making and good administration

Cited by: 1
Authors:
Roehl, Ulrik [1 ]
Crompvoets, Joep [2 ]
Affiliations:
[1] Copenhagen Business Sch, Dept Digitalizat, Howitzvej 60, DK-2000 Frederiksberg, Denmark
[2] Katholieke Univ Leuven KU Leuven, Publ Governance Inst, Leuven, Belgium
Keywords:
Administrative capabilities; administrative decisions; algorithmic bureaucracy; automated decision-making; good administration; multiple case-study; street-level; e-government; discretion
DOI:
10.1177/09520767231197801
CLC classification:
C93 [Management]; D035 [State administration]; D523 [Public administration]; D63 [State administration]
Subject classification codes:
12; 1201; 1202; 120202; 1204; 120401
Abstract:
Public administrative bodies around the world are increasingly applying automated administrative decision-making as underlying technologies such as machine learning mature. Such decision-making is a central element of emerging forms of algorithmic bureaucracy. Because it directly exercises public authority over individual citizens and firms, automated administrative decision-making makes it particularly important to consider its relation to values of good administration. Based on a multiple-case study, the article examines how the empirical use of automated decision-making influences and transforms issues of good administration in four policy areas in Denmark: business and social policy, labour market policy, agricultural policy, and tax policy. Supplementing the emerging literature, the article exemplifies how public authorities struggle to apply automated decision-making in ways that support rather than undermine good administration. We identify six empirical relations between the use of automated administrative decision-making and good administration: (I) giving accurate and comprehensible reasons; (II) informing addressees' expectations; (III) combining material and algorithmic expertise; (IV) achieving effective oversight; (V) continuously ensuring quality; and (VI) managing high complexity. Additionally, we pinpoint related key capabilities that administrative bodies need in order to support good administration.
Pages: 29
Related papers (50 total):
  • [31] Principal Fairness for Human and Algorithmic Decision-Making
    Imai, Kosuke
    Jiang, Zhichao
    STATISTICAL SCIENCE, 2023, 38 (02) : 317 - 328
  • [32] Algorithmic Driven Decision-Making Systems in Education
    Ferrero, Federico
    Gewerc, Adriana
    2019 XIV LATIN AMERICAN CONFERENCE ON LEARNING TECHNOLOGIES (LACLO 2019), 2020, : 166 - 173
  • [33] The value of responsibility gaps in algorithmic decision-making
    Munch, Lauritz
    Mainz, Jakob
    Bjerring, Jens Christian
    ETHICS AND INFORMATION TECHNOLOGY, 2023, 25 (01)
  • [34] Fairness, Equality, and Power in Algorithmic Decision-Making
    Kasy, Maximilian
    Abebe, Rediet
    PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021, 2021, : 576 - 586
  • [35] A RIGHT TO AN EXPLANATION OF ALGORITHMIC DECISION-MAKING IN CHINA
    Lin, Huanmin
    Wu, Hong
    HONG KONG LAW JOURNAL, 2022, 52 : 1163 - +
  • [36] Pushing the Limits of Fairness in Algorithmic Decision-Making
    Shah, Nisarg
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 7051 - 7056
  • [37] ALGORITHMIC STRUCTURING OF DIALOG DECISION-MAKING SYSTEMS
    ARAKSYAN, VV
    ENGINEERING CYBERNETICS, 1984, 22 (04): : 120 - 124
  • [38] The value of responsibility gaps in algorithmic decision-making
    Munch, Lauritz
    Mainz, Jakob
    Bjerring, Jens Christian
    ETHICS AND INFORMATION TECHNOLOGY, 2023, 25
  • [39] On the Impact of Explanations on Understanding of Algorithmic Decision-Making
    Schmude, Timothee
    Koesten, Laura
    Moeller, Torsten
    Tschiatschek, Sebastian
    PROCEEDINGS OF THE 6TH ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2023, 2023, : 959 - 970
  • [40] Contrastive Counterfactual Fairness in Algorithmic Decision-Making
    Mutlu, Ece Cigdem
    Yousefi, Niloofar
    Garibay, Ozlem Ozmen
    PROCEEDINGS OF THE 2022 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2022, 2022, : 499 - 507