A decision-theoretic extension of stochastic complexity and its applications to learning

Cited by: 52
Authors
Yamanishi, K [1 ]
Affiliation
[1] NEC Res Inst, Princeton, NJ 08540 USA
Keywords
aggregating strategy; batch-learning; complexity regularization; extended stochastic complexity; MDL principle; on-line prediction; stochastic complexity
DOI
10.1109/18.681319
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Subject Classification
0812
Abstract
Rissanen introduced stochastic complexity to define the amount of information in a given data sequence relative to a given hypothesis class of probability densities, where information is measured in terms of the logarithmic loss associated with universal data compression. This paper introduces the notion of extended stochastic complexity (ESC) and demonstrates its effectiveness in the design and analysis of learning algorithms in on-line prediction and batch-learning scenarios. ESC can be thought of as an extension of Rissanen's stochastic complexity to the decision-theoretic setting, where a general real-valued function serves as a hypothesis and a general loss function serves as a distortion measure. As an application of ESC to on-line prediction, the paper shows that a sequential realization of ESC yields an on-line prediction algorithm, Vovk's aggregating strategy, which can be thought of as an extension of the Bayes algorithm. Upper bounds on the cumulative loss of the aggregating strategy are derived, both in expected form and in worst-case form, for the case where the hypothesis class is continuous. As an application of ESC to batch-learning, the paper shows that a batch approximation of ESC induces a batch-learning algorithm, the minimum L-complexity algorithm (MLC), which extends the minimum description length (MDL) principle. Upper bounds on the statistical risk of MLC are derived that are the tightest obtained to date. Through ESC, the paper gives a unifying view of the most effective learning algorithms recently explored in computational learning theory.
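To make the abstract's objects concrete, here is a brief sketch in the notation standard for this line of work; the exact symbols are an assumption based on Yamanishi's companion exposition (entry [7] in the related papers below), not a quotation from this paper. Given a sample D^n = ((x_1, y_1), ..., (x_n, y_n)), a hypothesis class F carrying a prior pi, a loss function L, and a scale parameter lambda > 0, ESC takes a log-integral-exp form:

    I_\lambda(D^n) = -\frac{1}{\lambda} \ln \int_{\mathcal{F}} \pi(df)\, \exp\Big( -\lambda \sum_{t=1}^{n} L\big(y_t, f(x_t)\big) \Big)

This reduces to Rissanen's stochastic complexity -\ln \int \pi(d\theta)\, p_\theta(y^n) when L is the logarithmic loss and \lambda = 1. The batch counterpart, the MLC estimator, minimizes empirical loss plus a description-length penalty:

    \hat{f} = \arg\min_{f \in \mathcal{F}} \Big[ \sum_{t=1}^{n} L\big(y_t, f(x_t)\big) + \frac{1}{\lambda} \ln \frac{1}{\pi(f)} \Big]

A sequential realization of ESC is an exponential-weights scheme. The Python sketch below illustrates the finite-class case of an aggregating strategy of Vovk's type; the squared loss, the uniform prior, and the weighted-mean merging rule are illustrative assumptions (a weighted mean is an admissible merging rule for squared loss on a bounded range, though not Vovk's exact substitution function).

    import math

    def aggregating_strategy(experts, stream, lam=0.5,
                             loss=lambda y, p: (y - p) ** 2):
        # experts: list of functions x -> prediction (a finite hypothesis class)
        # stream:  iterable of (x, y) pairs
        # lam:     the scale parameter lambda of ESC / learning rate
        log_w = [0.0] * len(experts)   # uniform prior pi, kept in log space
        total_loss = 0.0
        for x, y in stream:
            # Current weights: w_t(f) proportional to pi(f) * exp(-lam * past loss of f).
            m = max(log_w)
            w = [math.exp(lw - m) for lw in log_w]
            z = sum(w)
            preds = [f(x) for f in experts]
            # Merge the experts' predictions by their normalized weights.
            p = sum(wi * pr for wi, pr in zip(w, preds)) / z
            total_loss += loss(y, p)
            # Exponential weight update from each expert's instantaneous loss.
            log_w = [lw - lam * loss(y, pr)
                     for lw, pr in zip(log_w, preds)]
        return total_loss

Run, for example, against constant experts f_c(x) = c on a grid of c values: the strategy's cumulative squared loss then exceeds that of the best constant by a term of order (ln N)/lambda for N experts, which is the flavor of the ESC-based regret bounds the paper derives for continuous classes.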
Pages: 1424 - 1439
Page count: 16
Related Papers (50 in total)
  • [1] On Stochastic Complexity Estimation - A Decision-Theoretic Approach
    Qian, GQ
    Gabor, G
    Gupta, RP
    [J]. IEEE TRANSACTIONS ON INFORMATION THEORY, 1994, 40 (04) : 1181 - 1191
  • [2] Complexity of decision-theoretic troubleshooting
    Vomlelová, M
    [J]. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2003, 18 (02) : 267 - 277
  • [3] Decision-Theoretic Troubleshooting
    Heckerman, D
    Breese, JS
    Rommelse, K
    [J]. COMMUNICATIONS OF THE ACM, 1995, 38 (03) : 49 - 57
  • [4] Decision-Theoretic Psychiatry
    Huys, Quentin J. M.
    Guitart-Masip, Marc
    Dolan, Raymond J.
    Dayan, Peter
    [J]. CLINICAL PSYCHOLOGICAL SCIENCE, 2015, 3 (03) : 400 - 421
  • [5] Learning Optimal Parameters in Decision-Theoretic Rough Sets
    Herbert, Joseph P.
    Yao, JingTao
    [J]. ROUGH SETS AND KNOWLEDGE TECHNOLOGY, PROCEEDINGS, 2009, 5589 : 610 - 617
  • [6] Framework of a decision-theoretic tutoring system for learning of mechanics
    Pek, P-K
    Poh, K-L
    [J]. JOURNAL OF SCIENCE EDUCATION AND TECHNOLOGY, 2000, 9 (04) : 343 - 356
  • [7] Extended stochastic complexity and its applications to learning
    Yamanishi, K
    [J]. ADVANCES IN MINIMUM DESCRIPTION LENGTH THEORY AND APPLICATIONS, 2005, : 215 - 244
  • [8] Selective Supervision: Guiding Supervised Learning with Decision-Theoretic Active Learning
    Kapoor, Ashish
    Horvitz, Eric
    Basu, Sumit
    [J]. 20TH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2007, : 877 - 882
  • [9] Decision-theoretic Clustering of Strategies
    Bard, Nolan
    Nicholas, Deon
    Szepesvari, Csaba
    Bowling, Michael
    [J]. PROCEEDINGS OF THE 2015 INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS & MULTIAGENT SYSTEMS (AAMAS'15), 2015, : 17 - 25
  • [10] On decision-theoretic foundations for defaults
    Brafman, RI
    Friedman, N
    [J]. ARTIFICIAL INTELLIGENCE, 2001, 133 (1-2) : 1 - 33