Academic journal article Psychological Test and Assessment Modeling

Detecting Unmotivated Individuals with a New Model-Selection Approach for Rasch Models

1 Introduction

Psychological tests in general, and achievement tests in particular, are regularly used for psychological assessment, the evaluation of educational institutions, and psychological research. The responses of the test takers to the test items are usually analysed with an item response model. Item response models are based on the assumption that the observed responses are governed by a specific trait, which is latent and cannot be observed directly. Item response models specify the relation between the observable responses and the latent trait. This relation can then be used to infer the trait level of each test taker from his/her responses in the test. The process of trait inference crucially depends on how well the item response model represents the relation between the trait and its manifestations. Valid inference requires both the choice of an adequate item response model and precise estimates of the model's parameters. Trait inference can go seriously wrong if the parameter estimates are biased and deviate sharply from the true values.
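The relation between the latent trait and the observable responses can be illustrated with the Rasch model, under which the probability of a correct response depends only on the difference between the person's ability and the item's difficulty. The following sketch is purely illustrative; the symbols `theta` (ability) and `b` (difficulty) follow common item response theory notation and are not taken from the article itself.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch model:
    P(X = 1 | theta, b) = 1 / (1 + exp(-(theta - b))),
    where theta is person ability and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the probability of success is 0.5;
# as ability exceeds difficulty, the probability rises toward 1.
p_equal = rasch_prob(theta=0.0, b=0.0)
p_high = rasch_prob(theta=2.0, b=0.0)
```

The characteristic property used for parameter estimation is that this probability is monotone in the ability-difficulty difference, which is what makes biased item difficulty estimates (e.g. due to careless responding) so consequential for trait inference.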

Except in applications to psychological or educational assessment, test results usually have no major personal consequences for the test takers. Such a situation is called low-stakes testing. As there is no extrinsic reward for high test scores, some test takers have little motivation to perform as well as they could, especially when there is little freedom to choose whether to take the test. Unmotivated test takers make little effort to solve the test items regularly, but rely on approximations and shortcut strategies to avoid as much mental effort as possible. One extreme form is careless responding, where answers are given quickly, without any serious engagement in active problem solving. It is well known that careless responding distorts the estimation of the item parameters (Bolt, Cohen, & Wollack, 2002; Oshima, 1994; Schnipke, 1999) and undermines score validity (Wise & DeMars, 2006; Wise & Kong, 2005). Items, for example, appear more difficult, and item discrimination is reduced. This can have serious consequences for psychological assessment. Hence, it would be beneficial to identify unmotivated individuals and to reduce their distorting effect on model calibration. Several methods have been suggested for this purpose so far.

A first approach to handling careless responding relies on discrete mixtures of item response models (Rost, 1990). This approach assumes that the test takers can be divided into several classes with respect to their mode of responding. In the simplest case, just two classes are assumed: a first class consisting of responders who always respond in a regular way, and a second class consisting of responders who respond irregularly by using shortcut strategies on at least a subset of the items. Mixture item response models account for this structure of the data by allowing for different item response models in the two subgroups of test takers; see Bolt et al. (2002) for an application to low-stakes tests. Alternatively, instead of dividing the test takers into just two classes, one can assume one class of regular responders and several subclasses of irregular responders, defined by the position in the test at which the subjects start to respond carelessly. Although these models were originally proposed for test speededness and the effect of running out of time (Yamamoto & Everson, 1997), they can also be used in situations where individuals lose their motivation during the test, for example when items become too difficult (Cao & Stokes, 2008). As mixture item response models allow for different parameter values in the different latent classes, they are able to recover the item parameters in the class of the regular responders and can be used to segregate the test takers according to their mode of responding. …
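The two-class case described above can be sketched as a marginal response probability: with some mixing weight a test taker belongs to the regular class and responds according to the Rasch model, and otherwise responds carelessly (here modelled, for illustration only, as random guessing). The names `pi_regular` and `guess_prob` are hypothetical and not taken from the article; a real mixture analysis would estimate class membership and class-specific item parameters rather than fix a guessing rate.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mixture_response_prob(theta: float, b: float,
                          pi_regular: float,
                          guess_prob: float = 0.25) -> float:
    """Marginal probability of a correct response under a simple
    two-class mixture: with probability pi_regular the test taker
    responds regularly (Rasch model); otherwise he/she responds
    carelessly, modelled here as guessing with rate guess_prob
    (e.g. 1/4 for four response options)."""
    return (pi_regular * rasch_prob(theta, b)
            + (1.0 - pi_regular) * guess_prob)

# If everyone responds regularly, the mixture reduces to the Rasch model;
# if no one does, the success probability collapses to the guessing rate.
p_all_regular = mixture_response_prob(0.0, 0.0, pi_regular=1.0)
p_all_careless = mixture_response_prob(0.0, 0.0, pi_regular=0.0)
```

This makes the calibration problem concrete: fitting a single Rasch model to data generated by such a mixture conflates the two classes, which is why the mixture approach estimates separate parameters per class and recovers the regular-class item parameters.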
