Journal of Instructional Psychology

The Psychometric Benefits of Soft-Linked Items: A Reply to Pope and Harley


In this issue, Pope and Harley criticized our recent work with soft-linked items (Loerke, Jones, & Chow, 1999), claiming that soft-linked items are not independent and thus violate a basic assumption of classical test theory. They further claim that our finding that soft-linked items had better point-biserial correlation coefficients (PBCCs) than hard-linked items could have been predicted by "common sense," in that it simply reflects a higher proportion correct for soft-linked items. Because an examinee's response to an initial item has no effect on the scoring of the second item in a soft-linked pair, soft-linked items clearly meet the independence assumption and pose no problem for classical test theory. Moreover, since the scoring outcomes of hard-linked items are more likely to be consistent (both correct or both incorrect) than those of soft-linked items, common sense would, if anything, suggest that hard-linked items should produce higher PBCCs than soft-linked items. Finally, we point out that Pope and Harley's misunderstanding of local independence and unidimensionality within item response theory may have led them to nebulous logic that will confuse readers.



We recently presented evidence that soft-linked items have better psychometric properties than hard-linked items in achievement tests (Loerke, Jones, & Chow, 1999). Pope and Harley (in the current issue of this journal) have criticized this work, claiming that the findings are "common sense" and that differences between hard-linked and soft-linked items are due to an increased probability that soft-linked items will be scored as correct. Pope and Harley also claimed that soft-linked items are not independent. We acknowledge that there is indeed a link artifact, but this artifact exists only for hard-linked items, not for soft-linked items. We would like to point out that soft-linked items are indeed independent (this is their major appeal) and that it is hard-linked items that are not.

Linked items are items in which the examinee uses his/her answer from one item to compute an answer for a second item, typically using the numerical response format. These items are often used in multi-step calculations, allowing hierarchical computer scoring of complex reasoning. Linked items are a computer-scorable alternative to constructed response questions.

Hard-linked items require the examinee to get the first linked item correct before any of the subsequent linked items may be answered correctly. Thus, hard-linked items have a fixed key. Conversely, soft-linked items do not require the examinee to answer the first linked item correctly to have his/her response scored as correct on the subsequent linked item; that is, soft-linked items have an adaptive key. One method of accomplishing this is a computer algorithm that generates the appropriate keys for the soft-linked items on the basis of the response given to the previous item. Using this method, examinees are not penalized twice for a single incorrect response if the soft-linked item is answered correctly using the initial incorrect answer. Although several items may be linked together (nested linking), we limit our discussion to the simple case of a single item linked with an initial item.
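To make the two scoring rules concrete, the following is a minimal sketch in Python (our own illustration, not an implementation from the original study or from any operational scoring program); the two-step item, the follow_on() rule, and the numeric tolerance are all hypothetical.

# Minimal sketch (hypothetical names and item content) of scoring a
# two-item linked pair under hard-linked vs. soft-linked rules.

def follow_on(x):
    """Step 2 of a hypothetical two-step calculation: double the step-1 result."""
    return 2 * x

def score_pair(resp1, resp2, key1, soft=True, tol=1e-6):
    """Score a linked pair; key1 is the fixed key for the first item.

    Hard-linked: item 2 is keyed against follow_on(key1), a fixed key.
    Soft-linked: item 2 is keyed against follow_on(resp1), an adaptive key,
    so a correct follow-on computation earns credit even when resp1 is wrong.
    """
    s1 = int(abs(resp1 - key1) < tol)
    key2 = follow_on(resp1) if soft else follow_on(key1)
    s2 = int(abs(resp2 - key2) < tol)
    return s1, s2

# An examinee errs on step 1 (answers 7 instead of 5) but doubles correctly.
print(score_pair(7, 14, key1=5, soft=False))  # hard-linked: (0, 0), penalized twice
print(score_pair(7, 14, key1=5, soft=True))   # soft-linked: (0, 1), credit for step 2

Under the fixed key, the single step-1 error is penalized twice; under the adaptive key, credit is given for the correctly executed second step.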

Linked Items and Independence

Pope and Harley made the claim that linked items, in general, violate independence. However, they failed to make the distinction between the two types of linked items. Item independence is a fundamental assumption of classical test theory that states that item responses are randomly related when ability (θ) is held constant (Nunnally & Bernstein, 1994). Thus, all shared variance between items that is not explained by θ must be due to pure random error. Alternatively phrased, score on an item must be a function of θ only, not a function of score on any other item or a function of any other trait. …
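As a purely illustrative sketch of this point (ours, with all parameters assumed), the following Python simulation holds ability constant and scores the same responses under both rules: the soft-linked second item is statistically independent of the first item, while hard-linked scoring builds in a dependence that θ does not explain.

# Illustrative simulation (all parameters assumed): with ability held constant,
# the soft-linked second item is independent of the first item, whereas the
# hard-linked second item is not.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
theta = 0.0                                # ability, held constant

p = 1 / (1 + np.exp(-theta))               # P(correct) at this ability level
step1 = rng.random(n) < p                  # first item answered correctly?
step2 = rng.random(n) < p                  # follow-on computation carried out correctly?

soft_item2 = step2.astype(int)             # adaptive key: step 2 scored on its own merits
hard_item2 = (step1 & step2).astype(int)   # fixed key: step 1 must also be correct
item1 = step1.astype(int)

print("corr(item1, soft-linked item 2):", np.corrcoef(item1, soft_item2)[0, 1])  # close to 0
print("corr(item1, hard-linked item 2):", np.corrcoef(item1, hard_item2)[0, 1])  # clearly positive

With ability fixed, any remaining correlation between the first item and the hard-linked second item is exactly the link artifact described above.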
