Comparing the Difficulty of Examination Subjects with Item Response Theory


Korobko, Oksana B. and Glas, Cees A.W. and Bosker, Roel J. and Luyten, Johan W. (2008) Comparing the Difficulty of Examination Subjects with Item Response Theory. Journal of Educational Measurement, 45 (2). pp. 139-157. ISSN 0022-0655

PDF (509 kB): Restricted to UT campus only; request a copy.
Abstract: Methods are presented for comparing grades obtained in a situation where students can choose between different examination subjects. The comparison of grades is complicated by the interaction between students' pattern and level of proficiency on the one hand, and their choice of subjects on the other. Three methods based on item response theory (IRT) for estimating proficiency measures that are comparable across students and subjects are discussed: a method based on a model with a unidimensional representation of proficiency, a method based on a model with a multidimensional representation of proficiency, and a method based on a multidimensional representation of proficiency in which the stochastic nature of the choice of examination subjects is explicitly modeled. The methods are compared using data from the Central Examinations in Secondary Education in the Netherlands. The results show that the unidimensional IRT model produces unrealistic results, which do not appear under the two multidimensional IRT models. Both multidimensional models produce acceptable model fit, but the model that explicitly takes the choice process into account fits best.
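The unidimensional approach mentioned in the abstract can be illustrated with a minimal sketch of the Rasch model, the simplest unidimensional IRT model. The sketch below is illustrative only and is not the estimation procedure used in the paper (which fits more general models to examination data); all function names and the example difficulty values are assumptions for the illustration.

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability that a student with proficiency theta
    answers an item with difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=50):
    """Maximum-likelihood estimate of one student's proficiency theta
    via Newton-Raphson, given 0/1 responses and known item difficulties.
    Note: the MLE does not exist for all-correct or all-incorrect
    response patterns, so this sketch assumes a mixed pattern."""
    theta = 0.0
    for _ in range(iters):
        p = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(x - pi for x, pi in zip(responses, p))   # score function
        hess = -sum(pi * (1 - pi) for pi in p)              # observed information (negated)
        theta -= grad / hess
    return theta

# Illustrative item difficulties and a student's 0/1 response pattern
difficulties = [-1.0, -0.5, 0.5, 1.0]
responses = [1, 1, 0, 0]
theta_hat = estimate_theta(responses, difficulties)
```

A multidimensional model would instead give each student a vector of proficiencies, one per subject dimension, which is what allows the paper's comparison to avoid the unrealistic results of the unidimensional model.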
Item Type: Article
Copyright: © 2008 National Council on Measurement in Education
Faculty: Behavioural Sciences (BS)
Link to this item: http://purl.utwente.nl/publications/60253
Official URL: http://dx.doi.org/10.1111/j.1745-3984.2007.00057.x


Metis ID: 248952