Academic journal article Psychological Test and Assessment Modeling

Using Response Time Data to Inform the Coding of Omitted Responses


In many assessments there is a high likelihood that some examinees will omit at least one answer for one reason or another. This type of nonresponse may or may not be ability related: while low ability with respect to the measured construct may play a role, other explanations, such as low motivation, lack of attention, or running out of time, are also plausible. If the missing data are ignorable (i.e., missing at random or missing completely at random), estimates of item parameters and examinee ability in a latent variable model will be unbiased; if they are not ignorable, the treatment of these values can introduce systematic error into parameter estimates (Rubin, 1976). When analyzing responses from test administrations in which nonresponse is more than a rarely occurring exception, some principled way of treating these data is required. This is true in operational analyses using either classical test theory (which typically requires complete data) or modern test theory such as item response theory (IRT; Lord & Novick, 1968), which, in principle, can handle data that are missing completely at random or missing at random. In this paper we primarily address nonresponse treatments in the context of IRT or related methods. The goal of this study is to examine whether coding omitted responses on the basis of response time information from a computer-based assessment in a low-stakes context can improve results compared to the ad hoc methods (e.g., treating omitted responses as incorrect by default) typically applied when estimating item and ability parameters. This goal is accomplished using empirical data from the literacy and numeracy cognitive tests of the Programme for the International Assessment of Adult Competencies (PIAAC).
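To make the response-time idea concrete, the following is an illustrative heuristic only; the excerpt does not specify the study's actual coding rule, and both the function name and the 5-second threshold are arbitrary placeholders. A common rule of this kind treats a very fast omission as not attempted (to be excluded from estimation) and a slower omission as a genuine attempt scored incorrect.

```python
# Illustrative response-time heuristic (NOT the rule used in this study).
# A fast omission suggests the examinee did not engage with the item;
# a slow omission suggests an attempt that produced no answer.

def code_omit_by_rt(rt_seconds, threshold=5.0):
    """Return a code for an omitted response based on its response time.

    rt_seconds -- time the examinee spent on the item before moving on
    threshold  -- placeholder cutoff separating non-attempts from attempts
    """
    return "not_attempted" if rt_seconds < threshold else "incorrect"
```

Under such a rule, "not_attempted" omits would be handled like not administered items, while "incorrect" omits would enter the likelihood as wrong answers.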



Before proceeding, it is important to clearly define the different types of nonresponse seen in large-scale assessment data. We use the term "nonresponse" to refer to any value in a dataset of item responses that, after scoring, does not correspond to a correct or incorrect response code (or, by extension for polytomous items, to a score category that influences an examinee's estimate of ability). In simpler terms, if an individual does not provide an answer to a given item, it is considered a nonresponse. If an examinee has no opportunity to respond to an item, either by design or because the individual never saw the item, we refer to the resulting values as "missing" responses, specifically as not administered and not reached items, respectively. In contrast, we use the term "omit" for nonresponse values in cases where the examinee saw the item (or is believed to have seen it) but gave no response. The reason for this distinction is that missing and omitted responses are treated differently for the purpose of item response modeling and scoring. Not reached items, not administered items, and omitted responses each warrant a different treatment: an individual who, by design, never saw an item obviously cannot be expected to respond to it. Similarly, an examinee who did not reach the last two or three items because of time constraints had no chance to produce a response and may or may not have answered those items correctly. By contrast, an individual who saw an item and decided not to provide a response may have done so because of an understanding that the item is too difficult, or for other reasons such as a lack of motivation, or an intention, never acted upon, to return to the item later.
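The classification just described can be sketched in code. The following is a minimal, hypothetical example; the function name, data layout, and labels are illustrative and do not reflect PIAAC's actual coding scheme. It assumes a per-examinee response vector in administration order, with `None` marking any nonresponse, plus a flag indicating whether each item was presented.

```python
# Hypothetical sketch: classifying nonresponse values in a single
# examinee's response vector. Labels and conventions are illustrative.

def classify_nonresponse(responses, administered):
    """Label each entry as 'answered', 'not_administered',
    'not_reached', or 'omitted'.

    responses    -- scored responses in administration order
                    (e.g., 0/1 for incorrect/correct, None for no response)
    administered -- parallel list of booleans; False if the item was
                    never presented to this examinee (missing by design)
    """
    # Every administered but unanswered item after the last observed
    # response is treated as "not reached"; unanswered items before it
    # were seen and skipped, so they count as "omitted".
    answered = [i for i, r in enumerate(responses) if r is not None]
    last_answered = answered[-1] if answered else -1

    labels = []
    for i, (r, adm) in enumerate(zip(responses, administered)):
        if r is not None:
            labels.append("answered")
        elif not adm:
            labels.append("not_administered")
        elif i > last_answered:
            labels.append("not_reached")
        else:
            labels.append("omitted")
    return labels
```

For example, `classify_nonresponse([1, None, 0, None, None], [True, True, True, True, False])` labels the second item "omitted" (seen and skipped), the fourth "not_reached", and the fifth "not_administered".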

Treatment of nonresponse data

Typically, nonresponse data are treated in one of two ways for the purpose of item analysis and scoring: (1) the values are coded as not administered and excluded from the estimation of item and/or ability parameters, or (2) the values are coded as omits and scored as incorrect or partially correct. The former approach is generally applied for missing responses that appear sequentially, usually at the end of a test or test section. …
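The two conventional treatments above can be sketched as a simple recoding step. This is a hypothetical illustration, assuming nonresponse values have already been labeled as "omitted" versus other missing categories; the function name and codes are placeholders, and `None` stands in for the not-administered code that IRT estimation would skip.

```python
# Hypothetical sketch of the two conventional treatments: exclude a
# nonresponse from estimation (None) or score an omit as incorrect (0).

def apply_treatment(labels, responses, omit_as_incorrect=True):
    """Return a scored vector given nonresponse labels.

    labels            -- per-item labels such as 'answered', 'omitted',
                         'not_reached', 'not_administered'
    responses         -- scored responses (None where no response exists)
    omit_as_incorrect -- if True, score omits as 0; otherwise exclude them
    """
    scored = []
    for label, r in zip(labels, responses):
        if label == "answered":
            scored.append(r)
        elif label == "omitted" and omit_as_incorrect:
            scored.append(0)      # omit treated as a wrong answer
        else:
            scored.append(None)   # excluded from parameter estimation
    return scored
```

Toggling `omit_as_incorrect` switches between the two ad hoc treatments the paper contrasts with response-time-informed coding.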
