The Effect of Model Misspecification on Computerized Testing
Authors: Hung-Yi Lu, Yung-Feng Hsu, Kuo-Sung Hsueh
Research Article
Item response theory (IRT) has been widely applied in computerized adaptive testing (CAT), with logistic-type models used most often. IRT prescribes an item characteristic curve that gives the probability that an examinee with a given ability level answers an item correctly, as a function of the item's parameters. Examiners can develop various tests for different purposes based on a chosen item response model. However, in actual testing practice, the true item response model is often unknown a priori. The purpose of this study is to examine the effect of model misspecification on computerized testing. In norm-referenced testing, results indicated that model misspecification affects the estimates of examinees' abilities: both the RMSE and the test length increased significantly when the wrong item response model was used, especially when the item bank was calibrated under the three-parameter logistic model. In criterion-referenced testing, model misspecification had no effect on classification accuracy; however, it increased test length and therefore the cost of testing.
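For reference, the item characteristic curve discussed above can be sketched in the three-parameter logistic (3PL) form, which nests the one- and two-parameter models as special cases. This is a minimal illustration using the standard IRT parameterization (discrimination a, difficulty b, pseudo-guessing c); the parameter values are illustrative and not taken from the study.

```python
import math

def icc_3pl(theta, a, b, c=0.0):
    """Probability of a correct response under the 3PL model:
        P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    Setting c = 0 gives the 2PL model; a = 1 and c = 0 gives the
    one-parameter (Rasch) model.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustrative item: discrimination a = 1.2, difficulty b = 0.0,
# pseudo-guessing c = 0.2. At theta = b the probability is
# c + (1 - c) / 2 = 0.6, and it approaches c as theta decreases.
for theta in (-2.0, 0.0, 2.0):
    print(f"theta = {theta:+.1f}: P = {icc_3pl(theta, a=1.2, b=0.0, c=0.2):.3f}")
```

Fitting an item bank with one of these nested models when responses were generated under another is the kind of misspecification the study examines.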