Setting Standards and Detecting Intrajudge Inconsistency Using Interdependent Evaluation of Response Alternatives


Chang, Lei, van der Linden, Wim J., and Vos, Hans J. (2004) Setting Standards and Detecting Intrajudge Inconsistency Using Interdependent Evaluation of Response Alternatives. Educational and Psychological Measurement, 64 (5). pp. 781-801. ISSN 0013-1644

Full text: PDF, 125Kb (restricted to UT campus only)
Abstract: This article introduces a new test-centered standard-setting method, together with a procedure for detecting intrajudge inconsistency in its application. The method, which is based on interdependent evaluation of alternative responses, has judges closely evaluate the process that examinees use to solve multiple-choice items. The new method is analyzed against existing methods, particularly the Nedelsky and Angoff methods. Empirical results from three different experiments confirm the hypothesis that standards set by the new method are higher than those set by the Nedelsky method but lower than those set by the Angoff method. The procedure for detecting intrajudge inconsistency is based on residual diagnosis of the judgments, which makes it possible to identify the sources of inconsistency in items, response alternatives, and/or judges. An empirical application of the procedure in an experiment with the new standard-setting method suggests that the method is internally consistent; it also revealed an interesting difference between the residuals for the correct and incorrect alternatives.

Item Type: Article
Copyright: © 2004 Sage
Faculty: Behavioural Sciences (BS)
Research Group:
Link to this item: http://purl.utwente.nl/publications/60141
Official URL: http://dx.doi.org/10.1177/0013164404264847


Metis ID: 219637