ON MULTI-ARMED BANDITS AND DEBT COLLECTION

Cited: 0
Authors
Czekaj, Lukasz [1 ]
Biegus, Tomasz [1 ]
Kitlowski, Robert [1 ]
Tomasik, Pawel [2 ]
Affiliations
[1] Szybkie Skladki Sp. z o.o., Innowacyjna 1, Suwalki, Poland
[2] PICTEC, Al. Zwyciestwa 96-98, Bud. 4, Lok. B3-06, PL-81451 Gdynia, Poland
Keywords
Finance; Marketing; decision support systems; statistical analysis; optimization
DOI
Not available
Chinese Library Classification
TP39 [Computer applications]
Subject Classification Codes
081203; 0835
Abstract
In this paper we consider the minimisation of payment arrears in non-governmental organisations, using sports clubs as an example. The presented approach focuses on optimising contact with customers in order to motivate timely payments. We use multi-armed bandits to model the impact of different contact methods on payment arrears. The method efficiently balances exploration and exploitation at runtime, even when opportunities for customer contact are limited. We present the architecture of the enterprise system, describe the simulations used to optimise and evaluate the algorithms, and provide design considerations. We discuss the differences between the considered problem and classical MABs, and propose batch learning and product arms as ways to improve model performance.
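The formulation described in the abstract can be sketched as a bandit whose arms are contact channels and whose reward is whether a member then pays on time. The following is a minimal illustrative sketch, not the paper's implementation: it assumes binary rewards and Thompson sampling, and the channel names are hypothetical; the paper's actual arms, reward definition, and batch-learning / product-arm extensions are specific to its system.

```python
import random

class ThompsonBandit:
    """Thompson sampling over contact channels with Bernoulli rewards
    (reward 1 = member paid on time after the contact, 0 = arrears)."""

    def __init__(self, arms):
        self.arms = list(arms)
        # Beta(1, 1) prior on each arm's "pays on time" probability
        self.successes = {a: 1 for a in self.arms}
        self.failures = {a: 1 for a in self.arms}

    def select(self):
        # Sample a payment probability for each channel; contact via the best
        samples = {a: random.betavariate(self.successes[a], self.failures[a])
                   for a in self.arms}
        return max(samples, key=samples.get)

    def update(self, arm, paid_on_time):
        # Posterior update after observing the payment outcome
        if paid_on_time:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1

# Hypothetical channel names; real arms would come from the CRM
bandit = ThompsonBandit(["email", "sms", "phone_call"])
arm = bandit.select()
bandit.update(arm, paid_on_time=True)
```

Each contact opportunity triggers one `select`/`update` cycle, which is how exploration and exploitation stay balanced at runtime even when contacts are scarce: under-tried channels keep wide posteriors and are occasionally sampled highest.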
Pages: 137-141
Page count: 5