Dynamic pricing of regulated field services using reinforcement learning

Cited by: 0
Authors
Mandania, Rupal [1 ]
Oliveira, Fernando S. [2]
Affiliations
[1] Loughborough Univ, Sch Business & Econ, Loughborough, Leicestershire, England
[2] Univ Bradford, Sch Management, Bradford, England
Keywords
Dynamic pricing; quality management; regulation; reinforcement learning; resource flexibility; management; products; capacity
DOI
10.1080/24725854.2022.2151672
CLC Classification Number
T [Industrial Technology]
Subject Classification Code
08
Abstract
Resource flexibility and dynamic pricing are effective strategies for mitigating uncertainty in production systems; however, they have yet to be explored as a means of improving field service operations. We investigate the value of dynamic pricing and flexible resource allocation in the field service operations of a regulated monopoly providing two services: installations (paid-for) and maintenance (free). We study the conditions under which the company can improve service quality and the profitability of field services by introducing dynamic pricing for installations and jointly managing the resources allocated to the paid-for service (with relatively stationary demand) and the free service (with seasonal demand), when quality constraints (lead time) interact with resource flexibility (overtime workers at extra cost). We formalize the pricing decisions for the installation service as a contextual multi-armed bandit problem. A bandit algorithm can find a near-optimal policy for the joint management of the two services regardless of the shape of the unobservable demand function. The results show that: (i) dynamic pricing and resource management increase profitability; (ii) regulation of the service window is needed to maintain quality; (iii) under certain conditions, dynamic pricing of installation services can decrease the maintenance lead time; and (iv) underestimating demand is more detrimental to profit contribution than overestimating it.
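The abstract formalizes installation pricing as a contextual multi-armed bandit but does not specify the exact algorithm. The following is only a minimal epsilon-greedy sketch in Python: the price grid, seasonal context labels, and simulated profit signal are hypothetical assumptions standing in for the unobservable demand function, not the authors' experimental settings.

```python
import random
from collections import defaultdict

# Minimal epsilon-greedy contextual bandit for choosing an installation price.
# The price grid, context labels, and profit signal below are illustrative
# assumptions for this sketch, not the paper's settings.

PRICES = [80.0, 100.0, 120.0]   # candidate installation prices (the arms)
EPSILON = 0.1                   # exploration rate

q_values = defaultdict(float)   # running mean reward per (context, price)
counts = defaultdict(int)       # pulls per (context, price)

def choose_price(context: str) -> float:
    """Epsilon-greedy arm selection for the given context."""
    if random.random() < EPSILON:
        return random.choice(PRICES)                          # explore
    return max(PRICES, key=lambda p: q_values[(context, p)])  # exploit

def update(context: str, price: float, reward: float) -> None:
    """Incremental mean update; no shape of the demand function is assumed."""
    key = (context, price)
    counts[key] += 1
    q_values[key] += (reward - q_values[key]) / counts[key]

def simulated_profit(context: str, price: float) -> float:
    """Hypothetical stand-in for unobservable demand plus overtime cost."""
    demand = max(0.0, 10.0 - 0.05 * price
                 + (2.0 if context == "low_season" else 0.0))
    overtime_cost = 30.0 if context == "high_season" else 0.0
    return price * demand - overtime_cost

# Toy training loop: 13 high-season weeks, then 39 low-season weeks per year.
for week in range(5000):
    ctx = "high_season" if week % 52 < 13 else "low_season"
    price = choose_price(ctx)
    update(ctx, price, simulated_profit(ctx, price))
```

Because the update tracks only running mean rewards per (context, price) pair, the policy requires no assumption about the demand curve's shape, mirroring the model-free property claimed in the abstract.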
Pages: 1022 - 1034
Number of pages: 13
Related Papers
50 records in total
  • [1] Dynamic pricing policies for interdependent perishable products or services using reinforcement learning
    Rana, Rupal
    Oliveira, Fernando S.
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2015, 42 (01) : 426 - 436
  • [2] Dynamic Pricing for Differentiated PEV Charging Services Using Deep Reinforcement Learning
    Abdalrahman, Ahmed
    Zhuang, Weihua
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (02) : 1415 - 1427
  • [3] Dynamic pricing and reinforcement learning
    Carvalho, AX
    Puterman, ML
    [C]. PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003: 2916 - 2921
  • [4] Dynamic pricing under competition using reinforcement learning
    Kastius, Alexander
    Schlosser, Rainer
    [J]. JOURNAL OF REVENUE AND PRICING MANAGEMENT, 2022, 21 (01) : 50 - 63
  • [5] Dynamic Pricing by Multiagent Reinforcement Learning
    Han, Wei
    Liu, Lingbo
    Zheng, Huaili
    [C]. PROCEEDINGS OF THE INTERNATIONAL SYMPOSIUM ON ELECTRONIC COMMERCE AND SECURITY, 2008: 226 - 229
  • [6] Reinforcement Learning for Fair Dynamic Pricing
    Maestre, Roberto
    Duque, Juan
    Rubio, Alberto
    Arevalo, Juan
    [J]. INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 1, 2019, 868 : 120 - 135
  • [7] Dynamic air ticket pricing using reinforcement learning method
    Gao, Jinmin
    Le, Meilong
    Fang, Yuan
    [J]. RAIRO-OPERATIONS RESEARCH, 2022, 56 (04) : 2475 - 2493
  • [8] Application of Reinforcement Learning in Dynamic Pricing Algorithms
    Wang, Jintian
    Zhou, Lei
    [C]. 2009 IEEE INTERNATIONAL CONFERENCE ON AUTOMATION AND LOGISTICS (ICAL 2009), VOLS 1-3, 2009: 419 - 423
  • [9] Dynamic Pricing for Smart Grid with Reinforcement Learning
    Kim, Byung-Gook
    Zhang, Yu
    van der Schaar, Mihaela
    Lee, Jang-Won
    [C]. 2014 IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2014: 640 - 645