Psychological Test and Assessment Modeling

Effect of Item Order on Item Calibration and Item Bank Construction for Computer Adaptive Tests

Abstract

Item banks are typically constructed from responses to items that are presented in one fixed order; therefore, order effects between subsequent items may violate the independence assumption. We investigated the effect of item order on item bank construction, item calibration, and ability estimation. Fifteen polytomous items similar to items used in a pilot version of a computer adaptive test for anxiety (Walter et al., 2005; Walter et al., 2007) were presented in one fixed order or in an order randomly generated for each respondent. A total of N = 520 out-patients participated in the study. Item calibration (Generalized Partial Credit Model) yielded only small differences in slope and location parameters. Simulated test runs using either the full item bank or an adaptive algorithm produced very similar ability estimates (expected a posteriori estimation). These results indicate that item order had little impact on item calibration and ability estimation for this item set.

Key words: item response theory; computer adaptive testing; local independence; item bank construction

1. Introduction

Local item independence is a central assumption of almost any application of item response theory models. Items are locally independent if, for respondents at the same level of the underlying latent trait θ, responses to any given item are independent of responses to the other items of the test (Henning, 1989). Local independence does not prevent items from correlating across the range of all observed ability levels, but it does imply a lack of correlation among items once the ability level is fixed. Local independence is therefore a way of stating that it is indeed the latent trait that explains the relations between item responses.

Local independence may be violated if other person parameters, such as additional latent traits, are involved in the responses. In that case, responses have to be explained by multiple latent variables rather than by a single underlying latent trait, and the application of a unidimensional item response model may no longer be appropriate. Lack of independence can also arise when the response to one item depends on the responses to previous items. This type of response dependence can occur when earlier items contain clues to later items; item order obviously plays an important role here. In the literature, these two types of item dependence, trait multidimensionality and response dependence, are often not clearly distinguished from each other, and checking an item bank for local independence is often simply referred to as "ensuring unidimensionality". This is particularly true when unidimensional item response models are used, which, despite the rising interest in multidimensional item response models (e.g., Reckase, 2009), still dominate practical applications of item response theory such as the construction of item banks for computer adaptive testing.

Table 1 shows the steps required to construct an item bank for unidimensional computer adaptive testing (Walter, 2010). Local item independence and item order play a crucial role in this process. For the construction of the item bank, the order of presentation of the items is typically fixed. In an adaptive test, by contrast, the item selection algorithm determines the order of presentation, and this order can vary for each respondent.
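In formal terms, the local independence assumption discussed above states that, conditional on the latent trait θ, the joint probability of a response pattern factors into the product of the individual item response probabilities:

\[
P(X_1 = x_1, \ldots, X_n = x_n \mid \theta) \;=\; \prod_{i=1}^{n} P(X_i = x_i \mid \theta),
\]

where X_1, …, X_n denote the responses to the n items of the test. Both types of violation described above break this factorization: trait multidimensionality because conditioning on a single θ is insufficient, and response dependence because P(X_i = x_i) then depends on earlier responses in addition to θ.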

The purpose of the present study is to investigate the impact of item order on item bank construction. The general idea is to compare item parameter estimates obtained from responses to items presented in fixed order with item parameter estimates obtained from responses to items presented in random order. Numerical differences in item parameter estimates may or may not have a meaningful impact on ability estimates, and practitioners are usually far more interested in the ability levels of respondents than in item parameters. The focus of this study is, therefore, on quantifying how much ability estimates differ when item banks are constructed from responses to items presented in fixed versus random order. …
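To make this comparison concrete, the following minimal Python sketch shows how expected a posteriori (EAP) ability estimates under the Generalized Partial Credit Model could be computed for one respondent from two alternative calibrations of the same items. This is an illustration only, not the code used in the study; all parameter values and function names are invented assumptions.

import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities under the Generalized Partial Credit Model.

    theta : 1-D array of ability values
    a     : item slope parameter
    b     : step (location) parameters for categories 1..m
    Returns an array of shape (len(theta), m + 1) with the probability
    of each response category 0..m at each ability value."""
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    b = np.asarray(b, dtype=float)
    # Cumulative sums of a * (theta - b_v); category 0 contributes 0.
    z = np.cumsum(a * (theta[:, None] - b[None, :]), axis=1)
    z = np.concatenate([np.zeros((theta.size, 1)), z], axis=1)
    z -= z.max(axis=1, keepdims=True)  # guard against overflow
    ez = np.exp(z)
    return ez / ez.sum(axis=1, keepdims=True)

def eap_estimate(responses, item_params, nodes=None):
    """EAP ability estimate with a N(0, 1) prior, approximated on a
    fixed quadrature grid (normalizing constants cancel in the ratio)."""
    if nodes is None:
        nodes = np.linspace(-4.0, 4.0, 81)
    prior = np.exp(-0.5 * nodes ** 2)
    likelihood = np.ones_like(nodes)
    for x, (a, b) in zip(responses, item_params):
        likelihood *= gpcm_probs(nodes, a, b)[:, x]
    posterior = likelihood * prior
    return float(np.sum(nodes * posterior) / np.sum(posterior))

# Hypothetical (a, [b_1..b_m]) parameters for the same three items from
# two separate calibrations; the values are invented for illustration.
params_fixed  = [(1.2, [-0.8, 0.1, 0.9]),
                 (0.9, [-0.4, 0.6, 1.3]),
                 (1.5, [-1.1, -0.2, 0.7])]
params_random = [(1.1, [-0.7, 0.2, 1.0]),
                 (1.0, [-0.5, 0.5, 1.2]),
                 (1.4, [-1.0, -0.1, 0.8])]

responses = [2, 1, 3]  # observed response categories for one respondent
print(eap_estimate(responses, params_fixed))   # theta-hat, fixed-order bank
print(eap_estimate(responses, params_random))  # theta-hat, random-order bank

Running the estimator twice, once with the parameters from the fixed-order calibration and once with those from the random-order calibration, mirrors the logic of the comparison: if item order has little effect on calibration, the two θ estimates for the same response pattern should be nearly identical.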
