Sequential Normalized Maximum Likelihood in Log-loss Prediction

Cited by: 0
Authors
Kotlowski, Wojciech [1 ]
Grunwald, Peter [2 ]
Affiliations
[1] Poznan Univ Tech, Inst Comp Sci, Piotrowo 2, PL-60965 Poznan, Poland
[2] Cent Wiskunde & Informat, NL-1098 XG Amsterdam, Netherlands
Keywords
BOUNDS
DOI
Not available
Chinese Library Classification
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
The paper considers sequential prediction of individual sequences with log loss using an exponential family of distributions. We first show that the commonly used maximum likelihood strategy is suboptimal and requires an additional assumption about the boundedness of the data sequence. We then show that both problems can be addressed by adding the currently predicted outcome to the calculation of the maximum likelihood, followed by normalization of the distribution. The strategy obtained in this way is known in the literature as the sequential normalized maximum likelihood (SNML) strategy. We show that for general exponential families, the regret is bounded by the familiar (k/2) log n and is thus optimal up to O(1). We also introduce an approximation to SNML, flattened maximum likelihood, which is much easier to compute than SNML itself while retaining the optimal regret under some additional assumptions. Finally, we discuss the relationship to the Bayes strategy with Jeffreys' prior.
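The abstract does not spell out the SNML strategy formally. As a sketch using the formulation common in the NML literature (the notation here is assumed rather than taken from the abstract: p_theta ranges over the exponential family, x^n = (x_1, ..., x_n) is the data sequence, and k is the dimension of the family), the SNML prediction includes the candidate outcome x_n in the likelihood maximization and then normalizes over all possible next outcomes z:

    p_{\mathrm{SNML}}(x_n \mid x^{n-1})
      = \frac{\sup_\theta p_\theta(x^n)}{\int \sup_\theta p_\theta(x^{n-1}, z)\, dz}

Read this way, the regret bound stated above takes the form

    \sup_{x^n} \Bigl[ \log \sup_\theta p_\theta(x^n)
      - \sum_{t=1}^{n} \log p_{\mathrm{SNML}}(x_t \mid x^{t-1}) \Bigr]
      \le \tfrac{k}{2} \log n + O(1)

For example, in the Bernoulli case with s ones among the n-1 past outcomes, the unnormalized weight of a predicted 1 is the likelihood of the extended length-n sequence maximized at theta = (s+1)/n, that of a predicted 0 is maximized at theta = s/n, and the two weights are normalized to sum to one.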
Pages: 547-551
Number of pages: 5