Thresholding Bandits with Augmented UCB

Cited by: 0
Authors
Mukherjee, Subhojyoti [1 ]
Purushothama, Naveen Kolar [2 ]
Sudarsanam, Nandan [3 ]
Ravindran, Balaraman [1 ]
Affiliations
[1] Indian Inst Technol Madras, Dept Comp Sci & Engn, Chennai, Tamil Nadu, India
[2] Indian Inst Technol Madras, Dept Elect Engn, Chennai, Tamil Nadu, India
[3] Indian Inst Technol Madras, Dept Management Studies, Chennai, Tamil Nadu, India
Keywords
MULTIARMED BANDIT;
DOI
Not available
CLC classification number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this paper we propose the Augmented-UCB (AugUCB) algorithm for a fixed-budget version of the thresholding bandit problem (TBP), where the objective is to identify the set of arms whose quality is above a threshold. A key feature of AugUCB is that it uses both mean and variance estimates to eliminate arms that have been sufficiently explored; to the best of our knowledge, this is the first algorithm to employ such an approach for the considered TBP. Theoretically, we obtain an upper bound on the loss (probability of misclassification) incurred by AugUCB. Although UCBEV in the literature provides a better guarantee, it is important to emphasize that UCBEV requires access to the problem complexity (whose computation requires the arms' means and variances) and hence is not realistic in practice; in contrast, the implementation of AugUCB requires no such complexity inputs. We conduct extensive simulation experiments to validate the performance of AugUCB. Through our simulation work, we establish that AugUCB, owing to its use of variance estimates, performs significantly better than the state-of-the-art APT, CSAR, and other non-variance-based algorithms.
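The abstract does not spell out AugUCB's confidence radius or elimination schedule, so the following is only a minimal sketch of the general idea of variance-aware arm elimination for the thresholding bandit problem. The empirical-Bernstein-style radius, the round-robin pull schedule, and the `delta` parameter are assumptions for illustration, not the paper's exact algorithm.

```python
# Sketch of a variance-aware thresholding-bandit loop (NOT the exact AugUCB
# procedure; the confidence radius and pull schedule below are assumptions).
import math
import random


def threshold_bandit_sketch(arms, tau, budget, delta=0.1):
    """arms: list of callables returning a reward sample in [0, 1].
    tau: quality threshold; budget: total number of pulls allowed."""
    k = len(arms)
    counts = [0] * k
    sums = [0.0] * k
    sq_sums = [0.0] * k
    active = set(range(k))
    pulls = 0

    while pulls < budget and active:
        # Round-robin over the arms that are still active.
        for i in list(active):
            if pulls >= budget:
                break
            x = arms[i]()                     # pull arm i once
            counts[i] += 1
            sums[i] += x
            sq_sums[i] += x * x
            pulls += 1

            n = counts[i]
            mean = sums[i] / n
            var = max(sq_sums[i] / n - mean * mean, 0.0)
            # Empirical-Bernstein-style radius (an assumption, not the
            # paper's exact AugUCB confidence term).
            rad = (math.sqrt(2.0 * var * math.log(1.0 / delta) / n)
                   + 3.0 * math.log(1.0 / delta) / n)
            # Stop sampling arms that are confidently classified either way.
            if mean + rad < tau or mean - rad > tau:
                active.discard(i)

    # Output: arms whose empirical mean lies above the threshold.
    return [i for i in range(k) if counts[i] and sums[i] / counts[i] >= tau]


# Usage: three Bernoulli arms with means 0.2, 0.5, 0.8, threshold 0.5.
arms = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in (0.2, 0.5, 0.8)]
print(threshold_bandit_sketch(arms, tau=0.5, budget=3000))
```

The variance term is what distinguishes this style of method from mean-only rules such as APT: low-variance arms are classified with far fewer pulls, freeing budget for arms whose means sit close to the threshold.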
Pages: 2515 - 2521
Number of pages: 7