Explainable AI and Law: An Evidential Survey

Cited by: 0
Authors
Karen McGregor Richmond
Satya M. Muddamsetty
Thomas Gammeltoft-Hansen
Henrik Palmer Olsen
Thomas B. Moeslund
Affiliations
[1] Copenhagen University, Faculty of Law
[2] Aalborg University, Department of Architecture, Design, and Media Technology (CREATE), Visual Analysis and Perception Lab
Source
Digital Society | 2024 / Volume 3 / Issue 1
Keywords
XAI; Legal reasoning; Legal logics; Explainability; Artificial intelligence; Evidence;
DOI
10.1007/s44206-023-00081-z
Abstract
Decisions made by legal adjudicators and administrative decision-makers are often founded upon a reservoir of stored experiences, from which a tacit body of expert knowledge is drawn. Such expertise may be implicit and opaque, even to the decision-makers themselves, and it generates obstacles when implementing AI for automated decision-making tasks within the legal field: to the extent that AI-powered decision-making tools must themselves be founded upon a stock of domain expertise, opacities may proliferate. This raises particular issues within the legal domain, which requires a high level of accountability and, accordingly, transparency. Accountability in turn demands enhanced explainability, which entails that a heterogeneous body of stakeholders understand the mechanism underlying the algorithm to the extent that an explanation can be furnished. However, the “black-box” nature of some AI variants, such as deep learning, remains unresolved, and many machine decisions therefore remain poorly understood. This survey paper, based upon a unique interdisciplinary collaboration between legal and AI experts, provides a review of the explainability spectrum, as informed by a systematic survey of relevant research papers, and categorises the results. The article establishes a novel taxonomy, linking the differing forms of legal inference at play within particular legal sub-domains to specific forms of algorithmic decision-making. The diverse categories demonstrate different dimensions of explainable AI (XAI) research. The survey thus departs from the preceding monolithic approach to legal reasoning and decision-making by incorporating heterogeneity in legal logics: a feature which requires elaboration and should be accounted for when designing AI-driven decision-making systems for the legal field. It is hoped that administrative decision-makers, court adjudicators, researchers, and practitioners will gain unique insights into explainability and use the survey as the basis for further research within the field.
Related Papers
50 items in total
  • [1] Survey of Explainable AI Techniques in Healthcare
    Chaddad, Ahmad
    Peng, Jihao
    Xu, Jian
    Bouridane, Ahmed
    [J]. SENSORS, 2023, 23 (02)
  • [2] An Interrogative Survey of Explainable AI in Manufacturing
    Alexander, Zoe
    Chau, Duen Horng
    Saldana, Christopher
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (05) : 7069 - 7081
  • [3] Explainable AI for the metaverse: A Short Survey
    Selvi, Chemmalar G.
    Yenduri, Gokul
    Srivastava, Gautam
    Ramalingam, M.
    Reddy, Dasaradharami K.
    Uzair, Muhammad
    Gadekallu, Thippa Reddy
    [J]. 2023 INTERNATIONAL CONFERENCE ON INTELLIGENT METAVERSE TECHNOLOGIES & APPLICATIONS, IMETA, 2023, : 182 - 187
  • [4] A call for more explainable AI in law enforcement
    Matulionyte, Rita
    Hanif, Ambreen
    [J]. 2021 IEEE 25TH INTERNATIONAL ENTERPRISE DISTRIBUTED OBJECT COMPUTING CONFERENCE WORKSHOPS (EDOCW 2021), 2021, : 75 - 80
  • [5] Privacy-preserving explainable AI: a survey
    Nguyen, Thanh Tam
    Huynh, Thanh Trung
    Ren, Zhao
    Nguyen, Thanh Toan
    Nguyen, Phi Le
    Yin, Hongzhi
    Nguyen, Quoc Viet Hung
    [J]. SCIENCE CHINA INFORMATION SCIENCES, 2025, 68 (01)
  • [6] A Critical Survey on Fairness Benefits of Explainable AI
    Deck, Luca
    Schoeffer, Jakob
    De-Arteaga, Maria
    Kuehl, Niklas
    [J]. PROCEEDINGS OF THE 2024 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, ACM FACCT 2024, 2024, : 1579 - 1595
  • [7] Survey on Explainable AI: Techniques, challenges and open issues
    Abusitta, Adel
    Li, Miles Q.
    Fung, Benjamin C. M.
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 255
  • [8] A Survey of the State of Explainable AI for Natural Language Processing
    Danilevsky, Marina
    Qian, Kun
    Aharonov, Ranit
    Katsis, Yannis
    Kawas, Ban
    Sen, Prithviraj
    [J]. 1ST CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 10TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (AACL-IJCNLP 2020), 2020, : 447 - 459
  • [9] Survey on ontology-based explainable AI in manufacturing
    Naqvi, Muhammad Raza
    Elmhadhbi, Linda
    Sarkar, Arkopaul
    Archimede, Bernard
    Karray, Mohamed Hedi
    [J]. JOURNAL OF INTELLIGENT MANUFACTURING, 2024, 35 (08) : 3605 - 3627
  • [10] Explainable AI
    Veerappa, Manjunatha
    Rinzivillo, Salvo
    [J]. ERCIM NEWS, 2023, (134)