# Investigating Test Equating Methods in Small Samples through Various Factors

## Article excerpt


In education and psychology, tests and scales are widely used to monitor examinees' learning levels, place them in higher instructional levels, select staff, provide guidance, and deliver clinical services. Important decisions about examinees are made on the basis of their test or scale scores, so those scores must be accurate if the decisions are to be fair.

Tests and scales are sometimes administered at different times, and for security reasons different parallel test forms may be used on the same occasion. This practice, however, creates a problem of its own. Even when test developers construct forms with the same content and statistical specifications, the forms can differ in difficulty: if some forms consist of easy items while others contain difficult ones, examinees' scores will differ for reasons unrelated to ability. Forms constructed at different times should therefore be treated as alternate forms of the same test, but this raises concerns about their relative easiness or difficulty, and the scores of examinees tested at different times cannot be compared directly. When two forms measuring the same construct are administered to different groups of examinees, the difficulty of the items on the forms may not be equal. Test equating is used to overcome these difficulties so that scores obtained from different forms can be interpreted interchangeably (Kolen & Brennan, 2004; von Davier, Holland, & Thayer, 2004).

Kolen and Brennan (2004) defined test equating as the statistical process used to adjust scores obtained from test forms so that these scores can be used interchangeably. Crocker and Algina (1986) defined test equating as a process that establishes equivalent scores on two different measurement instruments; they pointed out that when the percentiles corresponding to scores X and Y, obtained from different tests of equal reliability that measure the same construct, are equal, the tests from which X and Y were obtained are equated. Angoff (1984) defined test equating as the process of converting the system of units of one test form into the system of units of another, pointing out that scores obtained from different forms are equated after this transformation. Consequently, test equating emerged because two or more test forms that measure the same content and construct can produce different scores for the same examinees.
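The equipercentile idea in Crocker and Algina's definition — a Form X score and a Form Y score are equivalent when they sit at the same percentile rank — can be sketched in a few lines. This is a minimal, illustrative implementation (a simple rank-matching rule, not the article's own procedure; the function and variable names are invented for the example):

```python
from bisect import bisect_right

def equipercentile_equate(scores_x, scores_y, x_value):
    """Map a Form X score to the Form Y scale by matching percentile
    ranks -- a bare-bones sketch of equipercentile equating."""
    xs, ys = sorted(scores_x), sorted(scores_y)
    # Percentile rank of x_value within the Form X score distribution.
    p = bisect_right(xs, x_value) / len(xs)
    # Form Y score occupying the same rank position in its distribution.
    idx = max(0, round(p * len(ys)) - 1)
    return ys[idx]
```

Under this rule, equating a form to itself returns the same score, and if Form Y scores run uniformly higher than Form X scores, a Form X score is mapped to the correspondingly higher Form Y score.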

Certain requirements must be satisfied to equate two test forms, and the literature offers several views on what they are. Hambleton and Swaminathan (1985) listed them as the properties of symmetry, same specifications, equity, and group invariance. Under symmetry, when scores are transformed from Form X to Form Y, the inverse of that transformation must also be valid (Kolen & Brennan, 2004). According to the property of same specifications, the forms to be equated must have the same content and statistical properties; scores obtained from an equating that ignores these properties cannot be used interchangeably (Kolen & Brennan, 2004). The property of equity, proposed by Lord (1980), requires that examinees be indifferent to which form, X or Y, they are administered. However, this property strictly holds only when the forms are identical, and when identical forms are constructed there is no need to equate them (Crocker & Algina, 1986; Kolen & Brennan, 2004). Under the property of group invariance, the equating of forms is independent of the examinee group: it does not matter which group is used to calculate the equating function between Form X and Form Y scores (Kolen & Brennan, 2004; Öztürk & Anil, 2012). …
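The symmetry property can be made concrete with the simplest equating function, a linear (mean–sigma) transformation: because the X→Y conversion is an invertible line, applying it and then its inverse returns the original score. The sketch below assumes this linear case (the helper and its names are illustrative, not from the article):

```python
from statistics import mean, stdev

def linear_equate(scores_x, scores_y):
    """Return the linear (mean-sigma) equating function X -> Y and its
    inverse Y -> X, illustrating the symmetry requirement."""
    a = stdev(scores_y) / stdev(scores_x)    # slope: ratio of standard deviations
    b = mean(scores_y) - a * mean(scores_x)  # intercept: aligns the means
    to_y = lambda x: a * x + b               # equate Form X score to Form Y scale
    to_x = lambda y: (y - b) / a             # inverse: Form Y back to Form X scale
    return to_y, to_x
```

Symmetry here means `to_x(to_y(x))` recovers `x` for any score, which is exactly the requirement that the inverse of the X-to-Y transformation also be a valid equating.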
