THE JUDICIAL DEMAND FOR EXPLAINABLE ARTIFICIAL INTELLIGENCE

Cited by: 5
Authors
Deeks, Ashley [1]
Affiliation
[1] Univ Virginia, Law Sch, Charlottesville, VA 22903 USA
Keywords
COMMON-LAW; DECISION-MAKING; EXPLANATION; AUTOMATION; BIAS
DOI
None available
Chinese Library Classification
D9 [Law]; DF [Law]
Discipline Classification Code
0301
Abstract
A recurrent concern about machine learning algorithms is that they operate as "black boxes," making it difficult to identify how and why the algorithms reach particular decisions, recommendations, or predictions. Yet judges are confronting machine learning algorithms with increasing frequency, including in criminal, administrative, and civil cases. This Essay argues that judges should demand explanations for these algorithmic outcomes. One way to address the "black box" problem is to design systems that explain how the algorithms reach their conclusions or predictions. If and as judges demand these explanations, they will play a seminal role in shaping the nature and form of "explainable AI" (xAI). Using the tools of the common law, courts can develop what xAI should mean in different legal contexts. There are advantages to having courts play this role: Judicial reasoning that builds from the bottom up, using case-by-case consideration of the facts to produce nuanced decisions, is a pragmatic way to develop rules for xAI. Further, courts are likely to stimulate the production of different forms of xAI that are responsive to distinct legal settings and audiences. More generally, we should favor the greater involvement of public actors in shaping xAI, which to date has largely been left in private hands.
Pages: 1829-1850
Page count: 22
Related Papers (50 total)
  • [41] Fuzzy Networks for Explainable Artificial Intelligence
    Arabikhan, Farzad
    Gegov, Alexander
    Kaymak, Uzay
    Akbari, Negar
    2023 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI, 2023, : 199 - 200
  • [42] Explainable Artificial Intelligence (XAI) in Insurance
    Owens, Emer
    Sheehan, Barry
    Mullins, Martin
    Cunneen, Martin
    Ressel, Juliane
    Castignani, German
    RISKS, 2022, 10 (12)
  • [43] Visualization of explainable artificial intelligence for GeoAI
    Roussel, Cedric
    FRONTIERS IN COMPUTER SCIENCE, 2024, 6
  • [44] Is explainable artificial intelligence intrinsically valuable?
    Colaner, Nathan
    AI & SOCIETY, 2022, 37 (01) : 231 - 238
  • [45] Explainable artificial intelligence for education and training
    Fiok, Krzysztof
    Farahani, Farzad V.
    Karwowski, Waldemar
    Ahram, Tareq
    JOURNAL OF DEFENSE MODELING AND SIMULATION-APPLICATIONS METHODOLOGY TECHNOLOGY-JDMS, 2022, 19 (02): : 133 - 144
  • [46] Explainable Artificial Intelligence in CyberSecurity: A Survey
    Capuano, Nicola
    Fenza, Giuseppe
    Loia, Vincenzo
    Stanzione, Claudio
    IEEE ACCESS, 2022, 10 : 93575 - 93600
  • [47] Explainable Artificial Intelligence as an Ethical Principle
    Gonzalez-Arencibia, Mario
    Ordonez-Erazo, Hugo
    Gonzalez-Sanabria, Juan-Sebastian
    INGENIERIA, 2024, 29 (02):
  • [48] Explainable Artificial Intelligence: Point and Counterpoint
    Knox, Andrew T.
    Khakoo, Yasmin
    Gombolay, Grace
    PEDIATRIC NEUROLOGY, 2023, 148 : 54 - 55
  • [49] Explainable Artificial Intelligence for Training and Tutoring
    Lane, H. Chad
    Core, Mark G.
    van Lent, Michael
    Solomon, Steve
    Gomboc, Dave
    ARTIFICIAL INTELLIGENCE IN EDUCATION: SUPPORTING LEARNING THROUGH INTELLIGENT AND SOCIALLY INFORMED TECHNOLOGY, 2005, 125 : 762 - 764
  • [50] A historical perspective of explainable Artificial Intelligence
    Confalonieri, Roberto
    Coba, Ludovik
    Wagner, Benedikt
    Besold, Tarek R.
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2021, 11 (01)