An IRT forecasting model: linking proper scoring rules to item response theory

Cited by: 0
Authors
Bo, Yuanchao Emily [1 ]
Budescu, David V. [2 ]
Lewis, Charles [2 ]
Tetlock, Philip E. [3 ]
Mellers, Barbara [3 ]
Affiliations
[1] NWEA, Portland, OR 97209 USA
[2] Fordham Univ, Bronx, NY 10458 USA
[3] Univ Penn, Philadelphia, PA 19104 USA
Source
JUDGMENT AND DECISION MAKING | 2017, Vol. 12, No. 2
Keywords
IRT; Forecasting; Brier scores; Proper Scoring Rules; Good Judgment Project; Gibbs sampling; DOMINANCE ANALYSIS; PREDICTORS; ACCURACY
DOI
Not available
Chinese Library Classification
B84 [Psychology]
Subject Classification Code
04; 0402
Abstract
This article proposes an Item Response Theory (IRT) forecasting model that incorporates proper scoring rules and evaluates forecasters' expertise in relation to the features of the specific questions they answer. We illustrate the model using geopolitical forecasts collected by the Good Judgment Project (GJP; see Mellers, Ungar, Baron, Ramos, Gurcay, Fincher, Scott, Moore, Atanasov, Swift, Murray, Stone & Tetlock, 2014). The expertise estimates from the IRT model, which account for variation in the difficulty and discrimination power of the events, capture the underlying construct being measured and are highly correlated with the forecasters' Brier scores. Furthermore, our expertise estimates based on the first three years of GJP data are better predictors of both the forecasters' fourth-year Brier scores and their activity level than either the overall Brier scores or Merkle's (2016) predictions based on the same period. Lastly, we discuss the benefits of using event-characteristic information in forecasting.
Pages: 90-103
Page count: 14
Related Papers
50 records in total
  • [1] Theory and applications of proper scoring rules
    Dawid A.P.
    Musio M.
    METRON, 2014, 72 (2) : 169 - 183
  • [2] ITEM RESPONSE THEORY (IRT): STATE OF THE ART
    Heydari, Pooneh
    MODERN JOURNAL OF LANGUAGE TEACHING METHODS, 2015, 5 (01): : 134 - 144
  • [3] On proper scoring rules and cumulative prospect theory
    Carvalho, Arthur
    Dimitrov, Stanko
    Larson, Kate
    EURO JOURNAL ON DECISION PROCESSES, 2018, 6 (3-4) : 343 - 376
  • [4] Scale Linking for the Testlet Item Response Theory Model
    Kim, Seonghoon
    Kolen, Michael J.
    APPLIED PSYCHOLOGICAL MEASUREMENT, 2022, 46 (02) : 79 - 97
  • [5] PROC IRT: A SAS Procedure for Item Response Theory
    Cole, Ki Matlock
    Paek, Insu
    APPLIED PSYCHOLOGICAL MEASUREMENT, 2017, 41 (04) : 311 - 320
  • [6] Application of Item Response Theory (IRT)-Graded Response Model (GRM) to Entrepreneurial Ecosystem Scale
    Sethar, Waqar Ahmed
    Pitafi, Adnan
    Bhutto, Arabella
    Nassani, Abdelmohsen A.
    Haffar, Mohamed
    Kamran, Shah Muhammad
    SUSTAINABILITY, 2022, 14 (09)
  • [7] A Comparison the Information Functions of the Item and Test in One, Two and Three Parametric Model of the Item Response Theory (IRT)
    Moghadamzadeh, Ali
    Salehi, Keyvan
    Khodaie, Ebrahim
    2ND INTERNATIONAL CONFERENCE ON EDUCATION AND EDUCATIONAL PSYCHOLOGY 2011, 2011, 29
  • [8] py-irt: A Scalable Item Response Theory Library for Python
    Lalor, John Patrick
    Rodriguez, Pedro
    INFORMS JOURNAL ON COMPUTING, 2023, 35 (01) : 5 - 13
  • [9] Using Item Response Theory (IRT) to select hints in an ITS
    Timms, Michael J.
    ARTIFICIAL INTELLIGENCE IN EDUCATION: BUILDING TECHNOLOGY RICH LEARNING CONTEXTS THAT WORK, 2007, 158 : 213 - 221
  • [10] THE ITEM RESPONSE THEORY (IRT) FOR THE ANALYSIS OF THE UNIVERSITY ENTRY TESTS
    Tammaro, R.
    Marzano, A.
    Notti, A.
    5TH INTERNATIONAL CONFERENCE OF EDUCATION, RESEARCH AND INNOVATION (ICERI 2012), 2012, : 79 - 89