Tight Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance

Cited by: 0
Authors
Bilodeau, Blair [1 ,2 ,3 ]
Foster, Dylan J. [4 ]
Roy, Daniel M. [2 ,3 ]
Affiliations
[1] Univ Toronto, Stat Sci, Toronto, ON, Canada
[2] Vector Inst, Toronto, ON, Canada
[3] Inst Adv Study, Olden Lane, Princeton, NJ 08540 USA
[4] MIT, Cambridge, MA 02139 USA
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
UNIVERSAL PORTFOLIOS; INFORMATION; PREDICTION
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We consider the classical problem of sequential probability assignment under logarithmic loss while competing against an arbitrary, potentially nonparametric class of experts. We obtain tight bounds on the minimax regret via a new approach that exploits the self-concordance property of the logarithmic loss. We show that for any expert class with (sequential) metric entropy O(γ^{-p}) at scale γ, the minimax regret is O(n^{p/(p+1)}), and that this rate cannot be improved without additional assumptions on the expert class under consideration. As an application of our techniques, we resolve the minimax regret for nonparametric Lipschitz classes of experts.
Pages: 11
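
To make the setting in the abstract concrete, here is a minimal, self-contained Python sketch of sequential probability assignment under logarithmic loss. It runs the classical Krichevsky-Trofimov (add-1/2) forecaster on binary data and measures regret against the best constant expert in hindsight, a simple parametric instance rather than the nonparametric classes the paper treats; the names kt_prob, log_loss, and n_rounds are ours, not the paper's.

import math
import random

def kt_prob(ones: int, total: int) -> float:
    # Krichevsky-Trofimov ("add-1/2") predictive probability of a 1.
    return (ones + 0.5) / (total + 1.0)

def log_loss(p: float, y: int) -> float:
    # Logarithmic loss of forecast p in (0,1) on outcome y in {0,1}.
    return -math.log(p if y == 1 else 1.0 - p)

random.seed(0)
n_rounds = 10_000
ys = [1 if random.random() < 0.3 else 0 for _ in range(n_rounds)]

ones = 0
learner_loss = 0.0
for t, y in enumerate(ys):
    p = kt_prob(ones, t)  # forecast issued before y_t is revealed
    learner_loss += log_loss(p, y)
    ones += y

q = ones / n_rounds  # best constant expert in hindsight
best_expert_loss = sum(log_loss(q, y) for y in ys)

regret = learner_loss - best_expert_loss
print(f"regret = {regret:.3f}, 0.5*log(n) = {0.5 * math.log(n_rounds):.3f}")

The printed regret sits near 0.5·log(n), the classical parametric rate; the O(n^{p/(p+1)}) rates in the abstract arise when the comparator class is nonparametric, with metric entropy growing as γ^{-p} for p > 0.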
Related Papers
12 items in total
  • [1] Tight Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance
    Bilodeau, Blair
    Foster, Dylan J.
    Roy, Daniel M.
    PROCEEDINGS OF THE 33RD CONFERENCE ON LEARNING THEORY (COLT 2020), PMLR 125, 2020
  • [2] Regret Bounds and Minimax Policies under Partial Monitoring
    Audibert, Jean-Yves
    Bubeck, Sébastien
    JOURNAL OF MACHINE LEARNING RESEARCH, 2010, 11: 2785 - 2863
  • [3] Scalable Sparse Covariance Estimation via Self-Concordance
    Kyrillidis, Anastasios
    Mahabadi, Rabeeh Karimi
    Quoc Tran Dinh
    Cevher, Volkan
    PROCEEDINGS OF THE TWENTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2014: 1946 - 1952
  • [4] Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers
    Negrea, Jeffrey
    Bilodeau, Blair
    Campolongo, Nicolò
    Orabona, Francesco
    Roy, Daniel M.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021
  • [5] Regret Bounds for Log-Loss via Bayesian Algorithms
    Wu, Changlong
    Heidari, Mohsen
    Grama, Ananth
    Szpankowski, Wojciech
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2023, 69 (9): 5971 - 5989
  • [6] Tight minimax rates for manifold estimation under Hausdorff loss
    Kim, Arlene K. H.
    Zhou, Harrison H.
    ELECTRONIC JOURNAL OF STATISTICS, 2015, 9 (1): 1562 - 1582
  • [7] Precise Regret Bounds for Log-loss via a Truncated Bayesian Algorithm
    Wu, Changlong
    Heidari, Mohsen
    Grama, Ananth
    Szpankowski, Wojciech
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [8] Adaptive Minimax Regret against Smooth Logarithmic Losses over High-Dimensional ℓ1-Balls via Envelope Complexity
    Miyaguchi, Kohei
    Yamanishi, Kenji
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS 2019), PMLR 89, 2019
  • [9] Posterior regret Γ-minimax estimation and prediction with applications on k-records data under entropy loss function
    Jozani, Mohammad Jafari
    Parsian, Ahmad
    COMMUNICATIONS IN STATISTICS-THEORY AND METHODS, 2008, 37 (14): 2202 - 2212