THE ITEM RESPONSE THEORY (IRT) FOR THE ANALYSIS OF THE UNIVERSITY ENTRY TESTS

Cited: 0
Authors
Tammaro, R. [1]
Marzano, A. [1]
Notti, A. [1]
Affiliations
[1] Univ Salerno, Salerno, Italy
Keywords
Item analysis; item response theory; classical test theory; assessment; test;
DOI
Not available
Chinese Library Classification
G40 [Education];
Discipline Codes
040101; 120403;
Abstract
For several years in our country there has been a growing need to collect, process, analyze, and interpret data and information, and therefore to have measuring instruments suitable for detecting and organizing the elements through which a given phenomenon occurs. One of these tools, even if not the most widely used, is the test. Every year the results of a test open or close the doors of the university to thousands of students. As Bottani (2011) rightly pointed out, such work requires specialists; building good tests demands skill and practice that cannot be replaced by a quick reading of instructions (Lucisano, 2011). This complex and articulated operation becomes even more difficult when tests are used to select and, as in our case, to grant or deny admission to a degree course. It is therefore necessary to verify the quality of the entry tests in use, to check their validity and reliability, and to suggest any changes needed to make them congruent with the purposes for which they were built. This means paying attention not only to the content, the minimum knowledge required, but above all to the less immediately visible and still partly neglected aspect of the "technical construction" of the test: its accuracy, its clarity, and how it is perceived. The quality, validity, and reliability of the questions, however, are closely related to the answers provided by the students. The psychometric examination of the items is, in fact, an important step in the test construction process. The results obtained through item analysis make it possible to select questions whose parameter values increase the reliability of the test and to determine its level of difficulty and its discriminating capacity (Barbaranelli & Natali, 2005).
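The item-analysis indices mentioned above (difficulty and discriminating capacity) can be sketched in a few lines of Python. The response matrix below and the use of the corrected point-biserial correlation as the discrimination index are illustrative assumptions, not the paper's actual data or procedure.

```python
import numpy as np

# Illustrative 0/1 response matrix: rows = candidates, columns = items
# (made-up data; the paper's real test responses are not reproduced here)
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
])

# Difficulty index p: proportion of correct answers per item
# (high p = easy item, low p = hard item)
difficulty = responses.mean(axis=0)

# Discrimination: correlation of each item with the total score
# excluding that item (corrected item-total correlation)
totals = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

print("difficulty:", difficulty)
print("discrimination:", np.round(discrimination, 2))
```

Items whose difficulty index is extreme (close to 0 or 1) or whose discrimination is near zero or negative are the ones an item analysis would flag for revision or removal.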
This analysis can be carried out through two approaches whose characteristics and assumptions differ markedly: Classical Test Theory (CTT) and Item Response Theory (IRT). In the first, the answers obtained are subjected to statistical analysis to highlight items that are too easy or too difficult, to identify distractors chosen too rarely or too often, to check whether the items discriminate between more and less competent candidates, and to make the appropriate calibrations to the tests. IRT, on the other hand, evaluates a subject's performance as a function of a latent ability through a statistical-mathematical model that yields not only an evaluation of individual performance but also the characteristics of each question. Accordingly, in the research project presented in this paper, the research team decided to analyze and correlate the results of the entry tests for the degree course in Primary Teacher Education over the last five years using both approaches, while focusing on the potential of IRT models, which allow objective and universal measures that transcend the context of measurement and the instrument used.
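As an illustration of the IRT approach described above, the following is a minimal sketch of the two-parameter logistic (2PL) model, one common IRT specification. The abstract does not state which model the authors used, so the choice of the 2PL and all parameter values here are assumptions.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a
    candidate with latent ability theta answers correctly an item
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# A candidate whose ability equals the item's difficulty (theta = b)
# has a 50% chance of success, regardless of discrimination a.
print(irt_2pl(0.0, a=1.2, b=0.0))  # 0.5

# Ability one unit above the item's difficulty (a = 1.0):
print(irt_2pl(1.0, a=1.0, b=0.0))
```

The key property, as the abstract notes, is that ability (theta) and item parameters (a, b) are estimated on a common scale, so the resulting measures do not depend on the particular sample of candidates or the particular set of items administered.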
Pages: 79-89
Page count: 11
Related Papers
50 items total
  • [31] A new scoring method for item response theory analysis of C-Tests
    Effatpanah, Farshad
    Baghaei, Purya
    Tabatabaee-Yazdi, Mona
    Babaii, Esmat
    LANGUAGE TESTING, 2024,
  • [32] A Comparison the Information Functions of the Item and Test in One, Two and Three Parametric Model of the Item Response Theory (IRT)
    Moghadamzadeh, Ali
    Salehi, Keyvan
    Khodaie, Ebrahim
    2ND INTERNATIONAL CONFERENCE ON EDUCATION AND EDUCATIONAL PSYCHOLOGY 2011, 2011, 29
  • [33] Item response theory analysis of cognitive tests in people with dementia: a systematic review
    McGrory, Sarah
    Doherty, Jason M.
    Austin, Elizabeth J.
    Starr, John M.
    Shenkin, Susan D.
    BMC PSYCHIATRY, 2014, 14
  • [35] Model Choice and Sample Size in Item Response Theory Analysis of Aphasia Tests
    Hula, William D.
    Fergadiotis, Gerasimos
    Martin, Nadine
    AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY, 2012, 21 (02) : S38 - S50
  • [36] The local reliability of the 15-item version of the Geriatric Depression Scale: An item response theory (IRT) study
    Chiesi, Francesca
    Primi, Caterina
    Pigliautile, Martina
    Ercolani, Sara
    della Staffa, Manuela Conestabile
    Longo, Annalisa
    Boccardi, Virginia
    Mecocci, Patrizia
    JOURNAL OF PSYCHOSOMATIC RESEARCH, 2017, 96 : 84 - 88
  • [37] Item response theory (IRT) coupled with computerized assessments: A powerful method to inform instructors
    Atwood, Charles H.
    Behmke, Derek A.
    Moody, John D.
    ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 2013, 245
  • [39] Estimating student ability and problem difficulty using item response theory (IRT) and TrueSkill
    Lee, Youngjin
    INFORMATION DISCOVERY AND DELIVERY, 2019, 47 (02) : 67 - 75
  • [40] Ambulation behaviour compared across three studies using Item Response Theory (IRT)
    Winter, J.
    Johnston, M.
    Sniehotta, F. F.
    Pollard, B.
    Michie, C.
    PSYCHOLOGY & HEALTH, 2006, 21 : 164 - 164