Authors:
Simchi-Levi, David [1,2]
Sun, Rui [3]
Wang, Xinshang [4,5]

Affiliations:
[1] MIT, Inst Data Syst & Soc, Dept Civil & Environm Engn, Cambridge, MA 02139 USA
[2] MIT, Operat Res Ctr, Cambridge, MA 02139 USA
[3] MIT, Inst Data Syst & Soc, Cambridge, MA 02139 USA
[4] Alibaba Grp US, San Mateo, CA 94402 USA
[5] Shanghai Jiao Tong Univ, Antai Coll Econ & Management, Shanghai 200240, Peoples R China
In this paper we study an online matching problem where a central platform needs to match a number of limited resources to different groups of users that arrive sequentially over time. The reward of each matching option depends on both the type of resource and the time period in which the user arrives. The matching rewards are assumed to be unknown but drawn from probability distributions that are known a priori. The platform then needs to learn the true rewards online from real-time observations of the matching results. The goal of the central platform is to maximize the total reward from all matchings without violating the resource capacity constraints. We formulate this matching problem with Bayesian rewards as a Markovian multiarmed bandit problem with budget constraints, where each arm corresponds to a pair of a resource and a time period. We devise our algorithm by first finding a policy for each single arm separately via a relaxed linear program and then "assembling" these single-arm policies through judicious selection criteria and well-designed pulling orders. We prove that the expected reward of our algorithm is at least (1/2)(√2 − 1) of the expected reward of an optimal algorithm.
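The algorithmic recipe the abstract describes (solve a relaxed linear program to obtain per-arm quantities, then assemble the single-arm decisions online under the capacity budgets) can be made concrete with a small sketch. The code below is not the paper's algorithm: it assumes Bernoulli matching rewards with Beta priors, at most one match per period, unit resource consumption, and a naive proportional selection rule in place of the paper's selection criteria and pulling orders; all names (capacity, prior_mean, x_lp, and so on) are hypothetical.

# Illustrative sketch only -- NOT the authors' algorithm.
# Assumptions (not from the abstract): Bernoulli rewards with Beta priors,
# one user group per period, unit resource consumption per match.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n_resources, n_periods = 3, 5          # arms are (resource, period) pairs
capacity = np.array([2, 2, 2])         # budget of each resource
prior_mean = rng.uniform(0.2, 0.8, size=(n_resources, n_periods))  # E[reward] under the prior

# Step 1: relaxed LP over expected pulls x[i, t] of each arm, maximizing
# prior-mean reward subject to capacity and one-match-per-period constraints.
c = -prior_mean.flatten()                                      # linprog minimizes, so negate
A_cap = np.kron(np.eye(n_resources), np.ones(n_periods))       # sum_t x[i, t] <= capacity[i]
A_per = np.kron(np.ones(n_resources), np.eye(n_periods))       # sum_i x[i, t] <= 1
res = linprog(c, A_ub=np.vstack([A_cap, A_per]),
              b_ub=np.concatenate([capacity, np.ones(n_periods)]),
              bounds=(0, 1), method="highs")
x_lp = res.x.reshape(n_resources, n_periods)

# Step 2: assemble the single-arm decisions online.  At each period, match
# resource i with probability x_lp[i, t] (a naive proportional rule),
# observe the matching result, and update a Beta posterior for that arm.
alpha = np.ones((n_resources, n_periods))
beta = np.ones((n_resources, n_periods))
remaining = capacity.copy()
total_reward = 0.0
true_p = rng.uniform(0, 1, size=(n_resources, n_periods))      # unknown true reward rates

for t in range(n_periods):
    probs = np.where(remaining > 0, x_lp[:, t], 0.0)
    if probs.sum() == 0:
        continue
    if rng.random() < probs.sum():                             # match at most one resource per period
        i = rng.choice(n_resources, p=probs / probs.sum())
        reward = rng.binomial(1, true_p[i, t])                 # observed matching result
        total_reward += reward
        remaining[i] -= 1
        alpha[i, t] += reward                                  # Bayesian posterior update
        beta[i, t] += 1 - reward                               # (tracked only; LP is not re-solved here)

print("collected reward:", total_reward)

In this toy version the posterior is tracked but the LP is solved only once from the prior; the paper's approach differs in how the single-arm policies are constructed and in the order and criteria used to pull the arms.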