A comparative analysis of pre-equating and post-equating in a large-scale assessment, high stakes examination
DOI: https://doi.org/10.38140/pie.v34i4.1980

Abstract
Equating is the statistical procedure used to adjust for differences in difficulty across test forms, making it possible for those forms to be used interchangeably. Depending on where equating fits in the assessment cycle, the methods are classified as pre-equating or post-equating. The major benefits of pre-equating are that it supports the operational processes of examination bodies through rapid score reporting, quality control, and flexibility in the assessment process. The purpose of this study is to ascertain whether pre-equating and post-equating results are comparable. Data for this study, which adopted an equivalent-groups design, were drawn from the 2012 Unified Tertiary Matriculation Examination (UTME) pre-test and the 2013 UTME post-test in the Use of English (UOE) subject. Pre-equating was carried out under the three-parameter logistic (3PL) Item Response Theory (IRT) model, and IRT software was used for item calibration. Pre- and post-equating were performed on 100-item UOE test forms. The results indicate that the raw-score and ability estimates from the pre-equated and post-equated models were comparable.
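As a point of reference for readers unfamiliar with the model named in the abstract, the sketch below shows the standard 3PL item response function. It is purely illustrative: the item parameters are hypothetical and the function does not reproduce the study's calibration software.

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Probability of a correct response under the standard 3PL IRT model:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    where a is discrimination, b is difficulty, and c is the
    lower asymptote (pseudo-guessing) parameter."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: a = 1.2, b = 0.5, c = 0.2
# When ability equals difficulty (theta == b), the probability
# reduces to c + (1 - c) / 2, i.e. 0.6 for these parameters.
p = p_correct_3pl(theta=0.5, a=1.2, b=0.5, c=0.2)
```

In calibration, each item's (a, b, c) parameters are estimated from response data; equating then places parameters from different forms on a common scale so scores are interchangeable.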