An adaptive work distribution mechanism based on reinforcement learning

Cited by: 18
Authors
Huang, Zhengxing [1 ,2 ]
van der Aalst, W. M. P. [1 ]
Lu, Xudong [2 ]
Duan, Huilong [2 ]
Affiliations
[1] Eindhoven Univ Technol, NL-5600 MB Eindhoven, Netherlands
[2] Zhejiang Univ, Coll Biomed Engn & Instrument Sci, Key Lab Biomed Engn, Minist Educ, Hangzhou, Zhejiang, Peoples R China
Keywords
Work distribution; Business process; Process condition; Reinforcement learning; Rough set theory;
DOI
10.1016/j.eswa.2010.04.091
Chinese Library Classification (CLC): TP18 [Theory of Artificial Intelligence];
Discipline codes: 081104; 0812; 0835; 1405;
Abstract
Work distribution, an integral part of business process management, is increasingly acknowledged as important for process-aware information systems. Although a wide variety of mechanisms have emerged to support work distribution, they pay little attention to performance and cannot balance work distribution requirements against process performance under changing process conditions. This paper presents an adaptive work distribution mechanism based on reinforcement learning. It takes process performance goals into account and can learn and reason about suitable work distribution policies as process conditions change. A learning-based simulation experiment for addressing work distribution problems in business process management is also introduced. The experimental results show that our mechanism outperforms reasonable heuristic or hand-coded approaches in satisfying process performance goals and is feasible for improving the current state of business process management. (C) 2010 Elsevier Ltd. All rights reserved.
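To make the abstract's idea concrete, here is a minimal illustrative sketch (not the paper's actual algorithm) of how a reinforcement-learning agent could learn a work distribution policy: a tabular Q-learning loop where states are hypothetical process-load conditions, actions are candidate resources, and the reward signals how well an assignment meets a performance goal. All names (`low_load`, `resource_a`, the reward function) are assumptions for illustration only.

```python
import random

random.seed(0)

# Hypothetical process conditions and resources (illustrative only).
STATES = ["low_load", "high_load"]
ACTIONS = ["resource_a", "resource_b"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(state, action):
    """Simulated performance feedback: each resource suits one condition."""
    best = "resource_a" if state == "low_load" else "resource_b"
    return 1.0 if action == best else 0.0

# Q-table mapping (condition, resource) pairs to estimated value.
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

for _ in range(2000):
    state = random.choice(STATES)
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    r = reward(state, action)
    next_state = random.choice(STATES)  # conditions drift independently here
    next_best = max(q[(next_state, a)] for a in ACTIONS)
    # Standard Q-learning update toward the bootstrapped target.
    q[(state, action)] += ALPHA * (r + GAMMA * next_best - q[(state, action)])

# The learned distribution policy: best resource per process condition.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

After training, the greedy policy assigns each simulated load condition to the resource that maximizes the reward, which is the adaptive behavior the abstract describes: the policy is learned from feedback rather than hand-coded.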
Pages: 7533-7541
Page count: 9
Related papers
50 records in total
  • [1] An Agent-based Self-Adaptive Mechanism with Reinforcement Learning
    Yu, Danni
    Li, Qingshan
    Wang, Lu
    Lin, Yishuai
    IEEE 39TH ANNUAL COMPUTER SOFTWARE AND APPLICATIONS CONFERENCE WORKSHOPS (COMPSAC 2015), VOL 3, 2015, : 582 - 585
  • [2] Dynamic Adaptive Checkpoint Mechanism for Streaming Applications Based on Reinforcement Learning
    Zhang, Zhan
    Liu, Tianming
    Shu, Yanjun
    Chen, Siyuan
    Liu, Xian
    2022 IEEE 28TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS, ICPADS, 2022, : 538 - 545
  • [3] Reinforcement learning based adaptive metaheuristics
    Tessari, Michele
    Iacca, Giovanni
    PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2022, 2022, : 1854 - 1861
  • [4] An Adaptive Authentication Based on Reinforcement Learning
    Cui, Ziqi
    Zhao, Yongxiang
    Li, Chunxi
    Zuo, Qi
    Zhang, Haipeng
    2019 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TW), 2019,
  • [5] Adaptive immunity based reinforcement learning
    Ito, Jungo
    Nakano, Kazushi
    Sakurama, Kazunori
    Hosokawa, Shu
    ARTIFICIAL LIFE AND ROBOTICS, 2008, 13 (01) : 188 - 193
  • [6] Power Distribution using Adaptive Reinforcement Learning Technique
    Patil, Pramod D.
    Kulkarni, Parag
    Aradhva, Rohan
    Lalwani, Govinda
    2015 INTERNATIONAL CONFERENCE ON ENERGY SYSTEMS AND APPLICATIONS, 2015, : 270 - 274
  • [7] A Reinforcement Learning-Based Adaptive Learning System
    Shawky, Doaa
    Badawi, Ashraf
    INTERNATIONAL CONFERENCE ON ADVANCED MACHINE LEARNING TECHNOLOGIES AND APPLICATIONS (AMLTA2018), 2018, 723 : 221 - 231
  • [8] Adaptive reinforcement learning based on degree of learning progress
    Mimura, Akihiro
    Kato, Shohei
    PROCEEDINGS OF THE SEVENTEENTH INTERNATIONAL SYMPOSIUM ON ARTIFICIAL LIFE AND ROBOTICS (AROB 17TH '12), 2012, : 959 - 962
  • [9] DREAM: Adaptive Reinforcement Learning based on Attention Mechanism for Temporal Knowledge Graph Reasoning
    Zheng, Shangfei
    Yin, Hongzhi
    Chen, Tong
    Quoc Viet Hung Nguyen
    Chen, Wei
    Zhao, Lei
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 1578 - 1588
  • [10] Adaptive Modeling of HRTFs Based on Reinforcement Learning
    Morioka, Shuhei
    Nambu, Isao
    Yano, Shohei
    Hokari, Haruhide
    Wada, Yasuhiro
    NEURAL INFORMATION PROCESSING, ICONIP 2012, PT IV, 2012, 7666 : 423 - 430