Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models

Cited: 8
Authors
Storme, Martin [1 ,2 ]
Myszkowski, Nils [3 ]
Baron, Simon [4 ]
Bernard, David [4 ]
Affiliations
[1] IESEG Sch Management, F-59800 Lille, France
[2] LEM CNRS 9221, F-59800 Lille, France
[3] Pace Univ, Dept Psychol, New York, NY 10038 USA
[4] Assess First, F-75000 Paris, France
Keywords
E-assessment; general mental ability; nested logit models; item-response theory; ability-based guessing; PROGRESSIVE MATRICES; REPLICATION; SELECTION; PRESSURE; CHOKING; ABILITY; MEMORY;
DOI
10.3390/jintelligence7030017
Chinese Library Classification (CLC)
B84 [Psychology];
Discipline Classification Code
04; 0402;
Abstract
Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in reasoning matrix-type tests. In the present research, we extended this result to a different context (online intelligence testing for recruitment) and a larger sample (N = 2949 job applicants). We found that the NLMs outperformed the Nominal Response Model (Bock, 1972) and provided significant reliability gains over their binary logistic counterparts. In line with previous research, the reliability gain was obtained mainly at low ability levels. Implications and practical recommendations are discussed.
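As a rough illustration of how an NLM recovers distractor information, the sketch below follows the general form of the two-parameter nested logit model of Suh and Bolt (2010); it is only one of the NLM variants the abstract alludes to, and the symbols (a_i, b_i, zeta_ik, lambda_ik) are generic placeholders rather than the parameterization reported in the paper.

% Minimal sketch of a two-parameter nested logit model (2PL-NLM; Suh & Bolt, 2010),
% assuming item i has one correct option and m-1 distractors; symbols are illustrative.
\[
% Correct vs. incorrect response follows a standard 2PL model:
P(u_i = 1 \mid \theta) = \frac{\exp\{a_i(\theta - b_i)\}}{1 + \exp\{a_i(\theta - b_i)\}}
\]
\[
% Conditional on an incorrect response, distractor choice follows a nominal model,
% which is where the additional (distractor) information is recovered:
P(v_i = k \mid u_i = 0, \theta) = \frac{\exp(\zeta_{ik} + \lambda_{ik}\theta)}{\sum_{k'=1}^{m-1} \exp(\zeta_{ik'} + \lambda_{ik'}\theta)}
\]
\[
% Marginal probability of selecting distractor k:
P(v_i = k \mid \theta) = \bigl[1 - P(u_i = 1 \mid \theta)\bigr]\, P(v_i = k \mid u_i = 0, \theta)
\]

Because the distractor slopes load on the same latent ability as the correct/incorrect part, incorrect responses also carry information about theta, which is consistent with the reported reliability gains concentrating at low ability levels, where incorrect responses are most frequent.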
Pages: 22