Constrained regret minimization for multi-criterion multi-armed bandits

Cited by: 1
Authors
Kagrecha, Anmol [1 ]
Nair, Jayakrishnan [2 ]
Jagannathan, Krishna [3 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
[2] Indian Inst Technol, Mumbai, India
[3] IIT Madras, Chennai, India
Keywords
Multi-criterion multi-armed bandits; Constrained bandits; Regret minimization;
DOI
10.1007/s10994-022-06291-9
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We consider a stochastic multi-armed bandit setting and study the problem of constrained regret minimization over a given time horizon. Each arm is associated with an unknown, possibly multi-dimensional distribution, and the merit of an arm is determined by several, possibly conflicting attributes. The aim is to optimize a "primary" attribute subject to user-provided constraints on other "secondary" attributes. We assume that the attributes can be estimated using samples from the arms' distributions, and that the estimators enjoy suitable concentration properties. We propose an algorithm called Con-LCB that guarantees logarithmic regret, i.e., the average number of plays of all non-optimal arms is at most logarithmic in the horizon. The algorithm also outputs a boolean flag that correctly identifies, with high probability, whether the given instance is feasible or infeasible with respect to the constraints. We also show that Con-LCB is optimal within a universal constant, i.e., that more sophisticated algorithms cannot do much better universally. Finally, we establish a fundamental trade-off between regret minimization and feasibility identification. Our framework finds natural applications, for instance, in financial portfolio optimization, where risk-constrained maximization of expected return is meaningful.
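The abstract gives only a high-level description of Con-LCB; the record contains no pseudocode. Below is a minimal, illustrative Python sketch of the general idea behind an LCB-based constrained bandit loop, assuming a single secondary "risk" attribute with a user-provided upper bound `tau`: arms whose risk lower confidence bound exceeds `tau` are excluded as implausible, and among the remaining plausibly feasible arms the one with the highest reward upper confidence bound is played. The function name `con_lcb_sketch`, the Hoeffding-style confidence radius, and the fallback rule for empty feasible sets are all assumptions for illustration, not the paper's exact algorithm.

```python
import math

def con_lcb_sketch(arms, tau, horizon):
    """Illustrative constrained-bandit loop (a sketch, NOT the paper's exact Con-LCB).

    arms: list of callables, each returning a (reward, risk) sample.
    tau:  user-provided upper bound on each arm's mean risk (the constraint).
    Returns (play_counts, feasibility_flag).
    """
    K = len(arms)
    n = [0] * K          # play counts
    r_sum = [0.0] * K    # cumulative rewards
    c_sum = [0.0] * K    # cumulative risks

    def pull(i):
        r, c = arms[i]()
        n[i] += 1
        r_sum[i] += r
        c_sum[i] += c

    for i in range(K):   # initialization: play every arm once
        pull(i)

    for t in range(K, horizon):
        # Hoeffding-style confidence radius (an assumed choice of bonus)
        rad = [math.sqrt(2.0 * math.log(t + 1) / n[i]) for i in range(K)]
        risk_lcb = [c_sum[i] / n[i] - rad[i] for i in range(K)]
        rew_ucb = [r_sum[i] / n[i] + rad[i] for i in range(K)]

        # plausibly feasible arms: risk LCB does not exceed the constraint
        feas = [i for i in range(K) if risk_lcb[i] <= tau]
        if feas:
            # optimism on the primary attribute within the feasible set
            i_star = max(feas, key=lambda i: rew_ucb[i])
        else:
            # no arm looks feasible; fall back to the least-risky arm
            i_star = min(range(K), key=lambda i: risk_lcb[i])
        pull(i_star)

    # feasibility flag: some arm's empirical mean risk satisfies the constraint
    feasible_flag = any(c_sum[i] / n[i] <= tau for i in range(K))
    return n, feasible_flag
```

With two arms where the high-reward arm violates the risk constraint, the loop concentrates its plays on the feasible arm and reports the instance feasible; the number of plays of the infeasible arm grows only with the confidence radius, mirroring the logarithmic-regret behavior claimed in the abstract.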
Pages: 431-458 (28 pages)
Related papers (50 in total)
  • [1] Constrained regret minimization for multi-criterion multi-armed bandits
    Kagrecha, Anmol
    Nair, Jayakrishnan
    Jagannathan, Krishna
    Machine Learning, 2023, 112 : 431 - 458
  • [2] Lenient Regret for Multi-Armed Bandits
    Merlis, Nadav
    Mannor, Shie
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2021, 35 : 8950 - 8957
  • [3] Communication-Efficient Collaborative Regret Minimization in Multi-Armed Bandits
    Karpov, Nikolai
    Zhang, Qin
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024, : 13076 - 13084
  • [4] Memory-Constrained No-Regret Learning in Adversarial Multi-Armed Bandits
    Xu, Xiao
    Zhao, Qing
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 2371 - 2382
  • [5] Fairness and Welfare Quantification for Regret in Multi-Armed Bandits
    Barman, Siddharth
    Khan, Arindam
    Maiti, Arnab
    Sawarni, Ayush
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 6762 - 6769
  • [6] Bounded Regret for Finitely Parameterized Multi-Armed Bandits
    Panaganti, Kishan
    Kalathil, Dileep
    IEEE CONTROL SYSTEMS LETTERS, 2021, 5 (03): : 1073 - 1078
  • [7] Individual Regret in Cooperative Nonstochastic Multi-Armed Bandits
    Bar-On, Yogev
    Mansour, Yishay
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [8] Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk
    Chen, Tianrui
    Gangrade, Aditya
    Saligrama, Venkatesh
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [9] On Kernelized Multi-armed Bandits
    Chowdhury, Sayak Ray
    Gopalan, Aditya
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [10] Regional Multi-Armed Bandits
    Wang, Zhiyang
    Zhou, Ruida
    Shen, Cong
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84