When: 2 December 2020, 13:00–14:00

Where: This seminar is given online. E-mail Dan Hedlin if you want to attend.

Abstract

In a test situation based on item response theory (IRT), estimates of student abilities depend on the item parameters. Item parameters are therefore usually estimated (calibrated) before the items are used in a real testing situation. New items are often embedded in a test for the sole purpose of trying them out and calibrating them for future use. The students who receive a specific new item for calibration can be chosen at random or according to optimal experimental design theory (Ul Hassan and Miller, 2020).
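As a concrete illustration (the abstract does not state which IRT model is used), in the widely used two-parameter logistic (2PL) model the probability that student j answers item i correctly is

P(Y_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\{-a_i(\theta_j - b_i)\}},

where \theta_j is the student's ability, a_i the item discrimination and b_i the item difficulty. Ability estimates obtained from such a model depend directly on the calibrated values of a_i and b_i.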

In this seminar, we compare the performance of random and optimal designs. While it is common to investigate new statistical methods using historical data, this is usually difficult or impossible for new experimental design methods, since no real data were collected according to the proposed designs. We present a way to compare designs through simulations based on parameters estimated from real historical data from the Swedish Scholastic Aptitude Test (SweSAT).
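As a rough sketch of this kind of simulation-based comparison (an illustration only, not the authors' actual method: the 2PL model, the parameter values and the "targeted" selection rule below are hypothetical stand-ins), one could simulate item responses from an estimated ability distribution and compare how well a random and a targeted calibration sample recover the parameters of a new item:

# Illustrative sketch only: compares a random with a crudely "targeted"
# calibration sample for a single 2PL item on simulated data.
# All parameter values and the selection rule are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    # 2PL item response function
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_loglik(params, theta, y):
    a, b = params
    p = p_correct(theta, a, b)
    eps = 1e-9
    return -np.sum(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

def calibrate(theta, y):
    # Maximum-likelihood estimate of the item parameters (a, b)
    return minimize(neg_loglik, x0=[1.0, 0.0], args=(theta, y),
                    method="Nelder-Mead").x

true_a, true_b = 1.2, 0.5            # hypothetical "new item" parameters
pool = rng.normal(0.0, 1.0, 10_000)  # ability pool (estimated from data in practice)
n_cal = 300                          # calibration sample size per design

# Crude two-point "targeted" sample: abilities closest to two points around
# the item difficulty (a stand-in for a genuine optimal-design criterion)
targets = np.array([true_b - 1.0, true_b + 1.0])
dist = np.min(np.abs(pool[:, None] - targets[None, :]), axis=1)
theta_t = pool[np.argsort(dist)[:n_cal]]

sq_err = {"random": [], "targeted": []}
for _ in range(200):
    theta_r = rng.choice(pool, n_cal, replace=False)   # random design
    for name, theta_d in (("random", theta_r), ("targeted", theta_t)):
        y = rng.binomial(1, p_correct(theta_d, true_a, true_b))
        a_hat, b_hat = calibrate(theta_d, y)
        sq_err[name].append((a_hat - true_a) ** 2 + (b_hat - true_b) ** 2)

for name, errs in sq_err.items():
    print(name, "mean squared parameter error:", np.mean(errs))

A comparison along the lines of the seminar would replace the stand-in selection rule with the optimal-design criterion of Ul Hassan and Miller (2020) and use ability and item parameters estimated from SweSAT data.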

Our analysis makes it possible to identify situations in which optimal designs are useful and situations in which they are not. A main result, however, is that further work is needed to develop optimal design methodology for item calibration.