Predictability and information theory. Part I: Measures of predictability

Authors
DelSole, T
Affiliations
[1] Ctr Ocean Land Atmosphere Studies, Calverton, MD 20705 USA
[2] George Mason Univ, Fairfax, VA 22030 USA
Keywords
DOI
10.1175/1520-0469(2004)061<2425:PAITPI>2.0.CO;2
Chinese Library Classification
P4 [Atmospheric Sciences (Meteorology)];
Subject Classification Code
0706; 070601
Abstract
This paper gives an introduction to the connection between predictability and information theory, and derives new connections between these concepts. A system is said to be unpredictable if the forecast distribution, which gives the most complete description of the future state based on all available knowledge, is identical to the climatological distribution, which describes the state in the absence of time-lag information. It follows that a necessary condition for predictability is for the forecast and climatological distributions to differ. Information theory provides a powerful framework for quantifying the difference between two distributions that agrees with intuition about predictability. Three information-theoretic measures have been proposed in the literature: predictive information, relative entropy, and mutual information. These metrics are discussed with the aim of clarifying their similarities and differences. All three metrics have attractive properties for defining predictability, including the fact that they are invariant with respect to nonsingular linear transformations, decrease monotonically (in a certain sense) in stationary Markov systems, and are easily decomposed into components that optimize them (in certain cases). Relative entropy and predictive information have the same average value, which in turn equals the mutual information. Optimization of mutual information leads naturally to canonical correlation analysis when the variables are jointly normally distributed. Closed-form expressions for these metrics are derived for finite-dimensional, stationary, Gaussian, Markov systems. Relative entropy and predictive information differ most significantly in that the former depends on the "signal-to-noise ratio" of a single forecast distribution, whereas the latter does not. Part II of this paper discusses the extension of these concepts to imperfect forecast models.
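The Gaussian closed-form expressions mentioned in the abstract can be made concrete with a short numerical sketch. The snippet below is an illustration of my own, not code or notation from the paper: it evaluates the three measures for normal forecast and climatological distributions using the standard Gaussian entropy, Kullback-Leibler, and canonical-correlation identities, and the scalar AR(1) example at the end shows how the mean-shift ("signal") term enters relative entropy but not predictive information.

```python
# Minimal sketch (assumed notation, not the paper's code) of the three
# Gaussian predictability measures discussed in the abstract.
import numpy as np

def predictive_information(Sigma_clim, Sigma_fcst):
    """Entropy of the climatological Gaussian minus entropy of the
    forecast Gaussian: 0.5 * ln(det(Sigma_clim) / det(Sigma_fcst))."""
    _, logdet_c = np.linalg.slogdet(Sigma_clim)
    _, logdet_f = np.linalg.slogdet(Sigma_fcst)
    return 0.5 * (logdet_c - logdet_f)

def relative_entropy(mu_fcst, Sigma_fcst, mu_clim, Sigma_clim):
    """KL divergence of the Gaussian forecast from the Gaussian
    climatology; the mean-shift ("signal") term is what distinguishes
    it from predictive information."""
    d = len(mu_clim)
    Sc_inv = np.linalg.inv(Sigma_clim)
    shift = np.asarray(mu_fcst) - np.asarray(mu_clim)
    _, logdet_c = np.linalg.slogdet(Sigma_clim)
    _, logdet_f = np.linalg.slogdet(Sigma_fcst)
    return 0.5 * (logdet_c - logdet_f
                  + np.trace(Sc_inv @ Sigma_fcst) - d
                  + shift @ Sc_inv @ shift)

def mutual_information(canonical_corrs):
    """Mutual information between jointly Gaussian initial and future
    states, from their canonical correlations: -0.5 * sum ln(1 - rho^2)."""
    rho = np.asarray(canonical_corrs)
    return -0.5 * np.sum(np.log1p(-rho**2))

# Scalar AR(1) illustration: x_{t+1} = r * x_t + noise, with unit
# climatological variance.  Conditioning on an initial anomaly x0 gives a
# forecast with mean r * x0 and variance 1 - r^2.
r, x0 = 0.8, 2.0
Sigma_c = np.eye(1)
Sigma_f = np.array([[1 - r**2]])
print(predictive_information(Sigma_c, Sigma_f))             # ignores the mean shift
print(relative_entropy([r * x0], Sigma_f, [0.0], Sigma_c))  # includes the signal term
print(mutual_information([r]))                              # lag-r correlation
```

In this AR(1) illustration the predictive information and mutual information coincide, while the relative entropy exceeds them whenever the initial anomaly is larger than one climatological standard deviation; averaging the relative entropy over climatologically distributed initial states recovers the mutual information, consistent with the equality of average values stated in the abstract.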
Pages: 2425 - 2440
Number of pages: 16
Related Articles
50 records in total (10 listed below)
  • [1] Predictability and information theory. Part II: Imperfect forecasts
    DelSole, T
    JOURNAL OF THE ATMOSPHERIC SCIENCES, 2005, 62 (09) : 3368 - 3381
  • [2] Average Predictability Time. Part I: Theory
    DelSole, Timothy
    Tippett, Michael K.
    JOURNAL OF THE ATMOSPHERIC SCIENCES, 2009, 66 (05) : 1172 - 1187
  • [3] Information Theory and Dynamical System Predictability
    Kleeman, Richard
    ENTROPY, 2011, 13 (03) : 612 - 649
  • [4] Quantifying the predictability of winter river flow in Iberia. Part I: Interannual predictability
    Gamiz-Fortis, Sonia
    Pozo-Vazquez, David
    Trigo, Ricardo M.
    Castro-Diez, Yolanda
    JOURNAL OF CLIMATE, 2008, 21 (11) : 2484 - 2502
  • [5] Stock market daily volatility and information measures of predictability
    D'Amico, Guglielmo
    Gismondi, Fulvio
    Petroni, Filippo
    Prattico, Flavio
    PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2019, 518 : 22 - 29
  • [6] Information theory, predictability and the emergence of complex life
    Seoane, Luis F.
    Sole, Ricard V.
    ROYAL SOCIETY OPEN SCIENCE, 2018, 5 (02)
  • [7] Predictability: Recent insights from information theory
    DelSole, Timothy
    Tippett, Michael K.
    REVIEWS OF GEOPHYSICS, 2007, 45 (04)
  • [8] Valuation Theory. Part I
    Bancerek, Grzegorz
    Kobayashi, Hidetsune
    Kornilowicz, Artur
    FORMALIZED MATHEMATICS, 2012, 20 (01): 7 - 14
  • [9] Information theory and predictability for low-frequency variability
    Abramov, R
    Majda, A
    Kleeman, R
    JOURNAL OF THE ATMOSPHERIC SCIENCES, 2005, 62 (01) : 65 - 87
  • [10] Evaluation of the predictability of fishing forecasts using information theory
    Baba, Shinya
    Matsuishi, Takashi
    FISHERIES SCIENCE, 2014, 80 (03) : 427 - 434