Learning When to Treat Business Processes: Prescriptive Process Monitoring with Causal Inference and Reinforcement Learning

Cited by: 7
Authors
Bozorgi, Zahra Dasht [1]
Dumas, Marlon [2]
La Rosa, Marcello [1]
Polyvyanyy, Artem [1]
Shoush, Mahmoud [2]
Teinemaa, Irene [1,2,3]
Affiliations
[1] Univ Melbourne, Parkville, Vic 3010, Australia
[2] Univ Tartu, Narva Mnt 18, EE-51009 Tartu, Estonia
[3] DeepMind, London, England
Funding
Australian Research Council; European Research Council;
Keywords
prescriptive process monitoring; causal inference; reinforcement learning;
DOI
10.1007/978-3-031-34560-9_22
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Increasing the success rate of a process, i.e., the percentage of cases that end in a positive outcome, is a recurrent process improvement goal. At runtime, there are often certain actions (a.k.a. treatments) that workers may execute to lift the probability that a case ends in a positive outcome. For example, in a loan origination process, a possible treatment is to issue multiple loan offers to increase the probability that the customer takes a loan. Each treatment has a cost. Thus, when defining policies for prescribing treatments to cases, managers need to consider the net gain of the treatments. Also, the effect of a treatment varies over time: treating a case earlier may be more effective than treating it later. This paper presents a prescriptive monitoring method that automates this decision-making task. The method combines causal inference and reinforcement learning to learn treatment policies that maximize the net gain. The method leverages a conformal prediction technique to speed up the convergence of the reinforcement learning mechanism by separating cases that are likely to end in a positive or negative outcome from uncertain cases. An evaluation on two real-life datasets shows that the proposed method outperforms a state-of-the-art baseline.
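The net-gain reasoning sketched in the abstract can be made concrete with a small example. The snippet below is an illustrative reconstruction under stated assumptions, not the authors' implementation: the names estimate_cate, GAIN, and COST are hypothetical, the conformal filtering is reduced to a fixed interval check, and the fixed decision rule stands in for the reinforcement learning policy the paper actually learns.

```python
# Minimal sketch of a net-gain treatment rule, assuming hypothetical values.
# estimate_cate, GAIN, and COST are illustrative placeholders; the paper's
# method learns a treatment policy with reinforcement learning instead of
# applying a fixed threshold like the one below.

GAIN = 100.0   # assumed monetary gain when a case ends in a positive outcome
COST = 25.0    # assumed cost of applying the treatment once

def estimate_cate(case_features):
    """Placeholder causal-uplift estimate: the lift in the probability of a
    positive outcome if the case is treated now (would be learned from data)."""
    return 0.3  # e.g., treatment raises the success probability by 30 points

def should_treat(case_features, p_low, p_high):
    """Treat only uncertain cases whose expected net gain is positive.

    p_low / p_high: a conformal-style interval for the probability of a
    positive outcome without treatment. Cases that are almost surely positive
    or almost surely negative are skipped, mirroring the filtering step the
    abstract describes for speeding up convergence.
    """
    if p_low > 0.9 or p_high < 0.1:  # outcome already near-certain, no treatment
        return False
    net_gain = estimate_cate(case_features) * GAIN - COST
    return net_gain > 0

# Example: an uncertain case (interval 0.4-0.7) with positive expected net gain.
print(should_treat({"activity": "submit_offer"}, p_low=0.4, p_high=0.7))  # True
```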
Pages: 364-380
Page count: 17