Sensitivity of Ensemble Forecast Verification to Model Bias

Cited by: 34
Authors
Wang, Jingzhuo [1,2]
Chen, Jing [2]
Du, Jun [3]
Zhang, Yutao [2]
Xia, Yu [4]
Deng, Guo [2]
Affiliations
[1] China Meteorol Adm, Chinese Acad Meteorol Sci, Beijing, Peoples R China
[2] China Meteorol Adm, Numer Weather Predict Ctr, Beijing, Peoples R China
[3] NOAA, Environm Modeling Ctr, NWS, NCEP, College Pk, MD USA
[4] Nanjing Univ Informat Sci & Technol, Nanjing, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Ensembles; Forecast verification; Skill; Ranked probability score; Transform Kalman filter; Temperature forecasts; Initial perturbations; Mesoscale; Spread; Reliability; System; Error;
DOI
10.1175/MWR-D-17-0223.1
Chinese Library Classification
P4 [Atmospheric Sciences (Meteorology)];
Discipline Codes
0706; 070601;
Abstract
This study demonstrates how model bias can adversely affect the quality assessment of an ensemble prediction system (EPS) by verification metrics. A regional EPS [the Global and Regional Assimilation and Prediction Enhanced System-Regional Ensemble Prediction System (GRAPES-REPS)] was verified over a one-month period over China. Three variables (500-hPa temperature, 2-m temperature, and 250-hPa wind) were selected to represent strong- and weak-bias situations, and ensemble spread and probabilistic forecasts were compared before and after a bias correction. The results show that the conclusions drawn from ensemble verification differ dramatically with and without model bias, for both ensemble spread and probabilistic forecasts. The GRAPES-REPS is severely underdispersive before the bias correction but becomes well calibrated afterward, although the improvement in the spread's spatial structure is much smaller; the spread-skill relation is also improved. The probabilities become much sharper and almost perfectly reliable once the bias is removed. It is therefore necessary to remove forecast bias before an EPS can be accurately evaluated, since an EPS is designed to handle only random error, not systematic error. Only when an EPS has little or no forecast bias can ensemble verification metrics reliably reveal its true quality without a prior bias correction. An implication is that EPS developers should not be expected to dramatically inflate ensemble spread (whether through perturbation methods or statistical calibration) merely to achieve reliability. The preferred solution is instead to reduce model bias through prediction-system development and to focus on the quality, not the quantity, of spread. Forecast products should likewise be derived from the debiased ensemble rather than the raw one.
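
The effect described in the abstract can be illustrated with a minimal Python sketch. It uses synthetic data; the array shapes, the constant mean-bias estimate, and the helper name spread_and_rmse are illustrative assumptions, not the GRAPES-REPS configuration or the authors' actual correction scheme. A systematic error is estimated on a training period, subtracted from each ensemble member, and domain-averaged spread is compared with the RMSE of the ensemble mean before and after debiasing.

import numpy as np

rng = np.random.default_rng(0)

n_days, n_members, n_points = 30, 15, 500      # toy dimensions
sigma = 0.8                                    # true forecast uncertainty
bias_true = 1.5                                # imposed systematic error

# Truth and members sample the same uncertainty around a common signal,
# so the ensemble is statistically consistent apart from the bias.
state = rng.normal(0.0, 2.0, (n_days, n_points))
truth = state + rng.normal(0.0, sigma, (n_days, n_points))
members = (state[:, None, :] + bias_true
           + rng.normal(0.0, sigma, (n_days, n_members, n_points)))

def spread_and_rmse(ens, obs):
    """Domain-averaged ensemble spread and RMSE of the ensemble mean."""
    spread = ens.std(axis=1, ddof=1).mean()
    rmse = np.sqrt(((ens.mean(axis=1) - obs) ** 2).mean())
    return spread, rmse

# Estimate the systematic error on a training period and remove it from
# the independent verification period (a deliberately crude, constant
# estimate; operational schemes vary with lead time and location).
train, verif = slice(0, 20), slice(20, None)
bias_hat = (members[train].mean(axis=1) - truth[train]).mean()
debiased = members[verif] - bias_hat

for label, ens in [("raw", members[verif]), ("debiased", debiased)]:
    s, r = spread_and_rmse(ens, truth[verif])
    print(f"{label:9s} spread={s:.2f}  rmse={r:.2f}  spread/rmse={s/r:.2f}")

With these toy numbers, the raw ensemble's spread-to-RMSE ratio falls well below 1, so it appears severely underdispersive even though the member perturbations are consistent with the truth; after the mean bias is subtracted the ratio approaches 1. This mirrors the paper's point that the bias, not insufficient spread, drives the raw verification result.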
Pages: 781-796 (16 pages)