THE JUDICIAL DEMAND FOR EXPLAINABLE ARTIFICIAL INTELLIGENCE

Cited by: 5
Author
Deeks, Ashley [1 ]
Affiliation
[1] Univ Virginia, Law Sch, Charlottesville, VA 22903 USA
Keywords
COMMON-LAW; DECISION-MAKING; EXPLANATION; AUTOMATION; BIAS;
DOI
Not available
CLC Number
D9 [Law]; DF [Law];
Discipline Code
0301;
Abstract
A recurrent concern about machine learning algorithms is that they operate as "black boxes," making it difficult to identify how and why the algorithms reach particular decisions, recommendations, or predictions. Yet judges are confronting machine learning algorithms with increasing frequency, including in criminal, administrative, and civil cases. This Essay argues that judges should demand explanations for these algorithmic outcomes. One way to address the "black box" problem is to design systems that explain how the algorithms reach their conclusions or predictions. If and as judges demand these explanations, they will play a seminal role in shaping the nature and form of "explainable AI" (xAI). Using the tools of the common law, courts can develop what xAI should mean in different legal contexts. There are advantages to having courts play this role: Judicial reasoning that builds from the bottom up, using case-by-case consideration of the facts to produce nuanced decisions, is a pragmatic way to develop rules for xAI. Further, courts are likely to stimulate the production of different forms of xAI that are responsive to distinct legal settings and audiences. More generally, we should favor the greater involvement of public actors in shaping xAI, which to date has largely been left in private hands.
Pages: 1829-1850
Number of pages: 22
Related Papers
50 records total
  • [21] Explainable Artificial Intelligence for Combating Cyberbullying
    Tesfagergish, Senait Gebremichael
    Damasevicius, Robertas
    SOFT COMPUTING AND ITS ENGINEERING APPLICATIONS, PT 1, ICSOFTCOMP 2023, 2024, 2030 : 54 - 67
  • [22] Drug discovery with explainable artificial intelligence
    Jimenez-Luna, Jose
    Grisoni, Francesca
    Schneider, Gisbert
    NATURE MACHINE INTELLIGENCE, 2020, 2 (10) : 573 - 584
  • [23] Explainable and responsible artificial intelligence PREFACE
    Meske, Christian
    Abedin, Babak
    Klier, Mathias
    Rabhi, Fethi
    ELECTRONIC MARKETS, 2022, 32 (04) : 2103 - 2106
  • [24] A Survey on Explainable Artificial Intelligence for Cybersecurity
    Rjoub, Gaith
    Bentahar, Jamal
    Wahab, Omar Abdel
    Mizouni, Rabeb
    Song, Alyssa
    Cohen, Robin
    Otrok, Hadi
    Mourad, Azzam
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2023, 20 (04) : 5115 - 5140
  • [25] Scientific Exploration and Explainable Artificial Intelligence
    Zednik, Carlos
    Boelsen, Hannes
    MINDS AND MACHINES, 2022, 32 : 219 - 239
  • [26] Explainable artificial intelligence: an analytical review
    Angelov, Plamen P.
    Soares, Eduardo A.
    Jiang, Richard
    Arnold, Nicholas I.
    Atkinson, Peter M.
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2021, 11 (05)
  • [27] Blockchain for explainable and trustworthy artificial intelligence
    Nassar, Mohamed
    Salah, Khaled
    Rehman, Muhammad Habib ur
    Svetinovic, Davor
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2020, 10 (01)
  • [28] Explainable Artificial Intelligence and Machine Learning
    Raunak, M. S.
    Kuhn, Rick
    COMPUTER, 2021, 54 (10) : 25 - 27
  • [29] Explainable artificial intelligence for digital forensics
    Hall, Stuart W.
    Sakzad, Amin
    Choo, Kim-Kwang Raymond
    WILEY INTERDISCIPLINARY REVIEWS: FORENSIC SCIENCE, 2022, 4 (02):
  • [30] From Explainable to Reliable Artificial Intelligence
    Narteni, Sara
    Ferretti, Melissa
    Orani, Vanessa
    Vaccari, Ivan
    Cambiaso, Enrico
    Mongelli, Maurizio
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION (CD-MAKE 2021), 2021, 12844 : 255 - 273