Explanation in AI and law: Past, present and future

Cited by: 69
Authors
Atkinson, Katie [1 ]
Bench-Capon, Trevor [1 ]
Bollegala, Danushka [1 ]
Affiliation
[1] Univ Liverpool, Dept Comp Sci, Liverpool, Merseyside, England
Keywords
Explainable AI; AI and law; Computational models of argument; Case-based reasoning; Artificial intelligence; Argumentation; Construction; Dimensions; Persuasion; Framework; Hayashi; Values; Model
DOI
10.1016/j.artint.2020.103387
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Explanation has been a central feature of AI systems for legal reasoning since their inception. Recently, the topic of explanation of decisions has taken on a new urgency, throughout AI in general, with the increasing deployment of AI tools and the need for lay users to be able to place trust in the decisions that the support tools are recommending. This paper provides a comprehensive review of the variety of techniques for explanation that have been developed in AI and Law. We summarise the early contributions and how these have since developed. We describe a number of notable current methods for automated explanation of legal reasoning and we also highlight gaps that must be addressed by future systems to ensure that accurate, trustworthy, unbiased decision support can be provided to legal professionals. We believe that insights from AI and Law, where explanation has long been a concern, may provide useful pointers for future development of explainable AI. (C) 2020 Elsevier B.V. All rights reserved.
Pages: 21