Multitasking, Multiarmed Bandits, and the Italian Judiciary

Cited by: 20
Authors
Bray, Robert L. [1 ]
Coviello, Decio [2 ]
Ichino, Andrea [3 ,4 ]
Persico, Nicola [1 ]
Affiliations
[1] Northwestern Univ, Kellogg Sch Management, Evanston, IL 60208 USA
[2] HEC Montreal, Montreal, PQ H3T 2A7, Canada
[3] European Univ Inst, I-50014 Fiesole, FI, Italy
[4] Univ Bologna, I-40126 Bologna, Italy
Keywords
multitasking; multiarmed bandits; field experiment; production scheduling; Italian judiciary;
DOI
10.1287/msom.2016.0586
Chinese Library Classification (CLC)
C93 [Management];
Discipline codes
12; 1201; 1202; 120202;
Abstract
We model how a judge schedules cases as a multiarmed bandit problem. The model indicates that a first-in-first-out (FIFO) scheduling policy is optimal when the case completion hazard rate function is monotonic. But there are two ways to implement FIFO in this context: at the hearing level or at the case level. Our model indicates that the former policy, prioritizing the oldest hearing, is optimal when the case completion hazard rate function decreases, and the latter policy, prioritizing the oldest case, is optimal when the case completion hazard rate function increases. This result convinced six judges of the Roman Labor Court of Appeals (a court that exhibits increasing hazard rates) to switch from hearing-level FIFO to case-level FIFO. Tracking these judges for eight years, we estimate that our intervention decreased the average case duration by 12% and the probability of a decision being appealed to the Italian supreme court by 3.8%, relative to a 44-judge control sample.
Pages: 545-558
Number of pages: 14
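
To make the abstract's scheduling comparison concrete, below is a minimal simulation sketch in Python. It is not the authors' multiarmed bandit model: the workload (every case needs a fixed number of hearings, which yields an increasing completion hazard), the arrival process, and the policy encodings are illustrative assumptions. Hearing-level FIFO is encoded as "serve the oldest pending hearing" (which interleaves cases round-robin style), and case-level FIFO as "serve the oldest unfinished case."

# Toy illustration (not the paper's model) of the abstract's scheduling result:
# with a fixed number of hearings per case, the completion hazard rate is
# increasing, so case-level FIFO (finish the oldest case first) is expected to
# beat hearing-level FIFO (hold the oldest pending hearing first) on mean duration.
import random
from statistics import mean

def simulate(policy, n_cases=200, hearings_per_case=5, seed=0):
    """Mean case duration (in hearing slots) under a given scheduling policy."""
    random.seed(seed)
    horizon = n_cases * hearings_per_case
    arrival = sorted(random.randrange(horizon) for _ in range(n_cases))
    remaining = [hearings_per_case] * n_cases   # hearings still needed per case
    pending_since = list(arrival)               # when each case's next hearing was requested
    done_at = [None] * n_cases
    t = 0
    while any(r > 0 for r in remaining):
        open_cases = [i for i in range(n_cases) if remaining[i] > 0 and arrival[i] <= t]
        if open_cases:
            if policy == "case_fifo":           # prioritize the oldest unfinished case
                i = min(open_cases, key=lambda c: arrival[c])
            else:                               # "hearing_fifo": oldest pending hearing first
                i = min(open_cases, key=lambda c: pending_since[c])
            remaining[i] -= 1
            pending_since[i] = t                # the case's next hearing joins the queue now
            if remaining[i] == 0:
                done_at[i] = t
        t += 1
    return mean(done_at[i] - arrival[i] for i in range(n_cases))

for policy in ("hearing_fifo", "case_fifo"):
    print(policy, round(simulate(policy), 1))

On this toy workload the case-level policy should produce a noticeably lower mean case duration, in line with the abstract's claim that prioritizing the oldest case is optimal when the completion hazard rate increases.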