How AI can learn from the law: putting humans in the loop only on appeal

Cited by: 0
Authors
I. Glenn Cohen
Boris Babic
Sara Gerke
Qiong Xia
Theodoros Evgeniou
Klaus Wertenbroch
Affiliations
[1] The Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL), The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School
[2] Harvard Law School
[3] University of Toronto
[4] Penn State Dickinson Law
[5] INSEAD
Source
Keywords
DOI
Not available
Chinese Library Classification number
Subject classification code
Abstract
While the literature on putting a “human in the loop” in artificial intelligence (AI) and machine learning (ML) has grown significantly, limited attention has been paid to how human expertise ought to be combined with AI/ML judgments. This design question arises because of the ubiquity and quantity of algorithmic decisions being made today in the face of widespread public reluctance to forgo human expert judgment. To resolve this conflict, we propose that human expert judges be included via appeals processes for review of algorithmic decisions. Thus, the human intervenes only in a limited number of cases and only after an initial AI/ML judgment has been made. Based on an analogy with appellate processes in judiciary decision-making, we argue that this is, in many respects, a more efficient way to divide the labor between a human and a machine. Human reviewers can add more nuanced clinical, moral, or legal reasoning, and they can consider case-specific information that is not easily quantified and, as such, not available to the AI/ML at an initial stage. In doing so, the human can serve as a crucial error correction check on the AI/ML, while retaining much of the efficiency of AI/ML’s use in the decision-making process. In this paper, we develop these widely applicable arguments while focusing primarily on examples from the use of AI/ML in medicine, including organ allocation, fertility care, and hospital readmission.
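As a purely illustrative sketch (not from the paper), the following Python snippet shows the appeals-based division of labor the abstract describes: the AI/ML model issues the initial judgment in every case, and a human expert is consulted only when a decision is appealed, acting as an error-correction check. All names, types, and the toy risk threshold are hypothetical.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Case:
        case_id: str
        features: Dict[str, float]
        appealed: bool = False  # set True if the affected party files an appeal

    @dataclass
    class Decision:
        case_id: str
        outcome: str
        decided_by: str  # "model" or "human (on appeal)"

    def decide(case: Case,
               model: Callable[[Dict[str, float]], str],
               human_review: Callable[[Case, str], str]) -> Decision:
        """AI/ML makes the initial judgment; a human intervenes only on appeal."""
        initial = model(case.features)
        if not case.appealed:
            return Decision(case.case_id, initial, decided_by="model")
        # On appeal, the human reviewer sees the case and the model's initial
        # judgment and may uphold or overturn it (error-correction check).
        final = human_review(case, initial)
        return Decision(case.case_id, final, decided_by="human (on appeal)")

    if __name__ == "__main__":
        # Hypothetical toy model and reviewer, for demonstration only.
        toy_model = lambda f: "deny" if f["risk"] > 0.7 else "approve"
        toy_reviewer = lambda case, initial: "approve"  # stand-in expert judgment
        print(decide(Case("A-17", {"risk": 0.82}, appealed=True),
                     toy_model, toy_reviewer))

The point of the sketch is that the human review path is exercised only for the (presumably small) appealed subset, so most decisions retain the efficiency of the automated pipeline.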
Related papers
50 items in total
  • [31] What can we learn from rodents about prolactin in humans?
    Ben-Jonathan, Nira
    LaPensee, Christopher R.
    LaPensee, Elizabeth W.
    ENDOCRINE REVIEWS, 2008, 29 (01) : 1 - 41
  • [32] The History of Otology and What We Can Learn from It: Putting Historical Research in Context
    Krishnan, Pavan S.
    Andresen, Nicholas S.
    Ward, Bryan K.
    OTOLOGY & NEUROTOLOGY, 2022, 43 (07) : 723 - 725
  • [33] Sex differences in how inflammation affects behavior: What we can learn from experimental inflammatory models in humans
    Lasselin, Julie
    Lekander, Mats
    Axelsson, John
    Karshikoff, Bianka
    FRONTIERS IN NEUROENDOCRINOLOGY, 2018, 50 : 91 - 106
  • [34] From meningococcus one can only survive, but how?
    D'Angelo, Gabriella
    Marseglia, Lucia
    Gitto, Eloisa
    MINERVA PEDIATRICS, 2022, 74 (06): 796 - 797
  • [35] Automating Analytics: How to learn metadata such that our buildings can learn from us
    Ploennigs, Joern
    2016 IEEE INTERNATIONAL CONFERENCE ON SENSING, COMMUNICATION AND NETWORKING (SECON WORKSHOPS), 2016
  • [36] The AI inherited bias effect: How humans can mimic artificial intelligence biases
    Vicente, Lucia
    Matute, Helena
    INTERNATIONAL JOURNAL OF PSYCHOLOGY, 2024, 59 : 36 - 36
  • [38] What Can AI Learn from Psychology and When Can AI Neglect it? An Ontogeny-Free Study on the Cognition of Numerosity
    Xu, Yingjin
    FUDAN JOURNAL OF THE HUMANITIES AND SOCIAL SCIENCES, 2023, 16 (04) : 495 - 513
  • [40] Can IC test learn from how a tester is tested
    Rajsuman, R
    INTERNATIONAL TEST CONFERENCE 2002, PROCEEDINGS, 2002, : 1186 - 1186