Academic journal article Kuram ve Uygulamada Egitim Bilimleri

Validity Evidence in Scale Development: The Application of Cross Validation and Classification-Sequencing Validation

Article excerpt

The validity of the measuring instruments used in education is one of the most important topics in the instrument development process. Validity is the criterion of whether an instrument actually serves its intended measurement purpose (Crocker & Algina, 2008; Downing & Haladyna, 2006; Kane, 2006). In other words, the degree of correspondence between the expected structure and the observed structure constitutes the construct validity of a test (Baykal, 1994). The validity of a measurement is therefore tied directly to the purpose for which the instrument measures. Validity is thus not a concept that can be considered independently of purpose, and a body of evidence should be collected to support it.

Approaches to validity based on the purpose of measurement are generally discussed in three groups: content, criterion-related, and construct validity (Brualdi, 1999; Erkuş, 2003; Hopkins, 1998). Content validity concerns how well the test items represent the structure to be measured. In criterion-related validity, the relationship between scores on one test and scores on another test, taken as the criterion, is examined. Construct validity is the degree to which meaningful organizational or psychological constructs are represented.
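In its simplest form, criterion-related evidence is a correlation coefficient between the new scale and an established criterion measure. The short Python sketch below uses made-up scores purely for illustration; the variable names and values are assumptions, not data from this study.

```python
import numpy as np

# Hypothetical data: total scores on a newly developed scale and scores on an
# established criterion measure for the same 10 respondents (illustrative only).
scale_scores = np.array([42, 37, 55, 48, 33, 60, 51, 45, 39, 58], dtype=float)
criterion_scores = np.array([40, 35, 52, 50, 30, 61, 49, 47, 36, 57], dtype=float)

# The Pearson correlation between the two score sets is reported as
# criterion-related validity evidence; values closer to 1 give stronger support.
validity_coefficient = np.corrcoef(scale_scores, criterion_scores)[0, 1]
print(f"criterion-related validity coefficient: {validity_coefficient:.2f}")
```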

The validity of measuring instruments, of test items, and consequently of the measurements used in education is one of the fundamental problem areas concerning the fairness of measurement. As is known, one of the primary purposes of measurement applications in education is to obtain information about individuals or about test items; flawless measurement instruments and results are therefore required, and the validity of an instrument's results should be high. One of the factors that negatively affects validity, however, is a "biased" item. A test that includes biased items will undoubtedly undermine the credibility of any evaluation and limit the decisions that can be made on the basis of the test results. The impartiality of items is examined through a set of psychometric procedures carried out in accordance with test theory (Camilli & Shepard, 1994; Holland & Wainer, 1993; Millsap & Everson, 1993; Raju & Ellis, 2002; Zumbo, 1999).
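One widely used procedure of this kind compares a reference and a focal group at matched total-score levels. The Python sketch below computes the Mantel-Haenszel common odds ratio and the ETS delta value for a single dichotomous item; the function name, data layout, and flagging threshold are assumptions given only to make the idea of matched-group comparison concrete, not a procedure taken from this article.

```python
import numpy as np

def mantel_haenszel_dif(item, group, total_score):
    """Mantel-Haenszel DIF index for one dichotomous (0/1) item.

    item        : 0/1 responses to the studied item
    group       : 0 = reference group, 1 = focal group
    total_score : matching variable, e.g., the total test score

    Returns the common odds ratio and the ETS delta value
    (delta = -2.35 * ln(odds ratio)); |delta| >= 1.5 is often flagged.
    """
    item, group, total_score = map(np.asarray, (item, group, total_score))
    num, den = 0.0, 0.0
    for k in np.unique(total_score):              # stratify by matching score
        mask = total_score == k
        a = np.sum((group[mask] == 0) & (item[mask] == 1))  # reference, correct
        b = np.sum((group[mask] == 0) & (item[mask] == 0))  # reference, incorrect
        c = np.sum((group[mask] == 1) & (item[mask] == 1))  # focal, correct
        d = np.sum((group[mask] == 1) & (item[mask] == 0))  # focal, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    odds_ratio = num / den if den > 0 else np.nan
    delta = -2.35 * np.log(odds_ratio)
    return odds_ratio, delta
```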

Stuck (1995) proposed in his study that measurement errors and biased items in particular are among the factors that undermine construct validity. Because the validity problem is one of degree of adequacy, he proposed feasibility validity in place of construct feasibility.

According to Messick (1995), six distinguishable aspects of validity are emphasized in educational and psychological measurement: content, substantive, structural, generalizability, external, and consequential aspects. All of these aspects are evaluated as evidence to be collected in validating a measure.

In order to identify the "construct" validity of a measurement device, factor analysis is applied for a validity study (Croncbach & Meehl, 1955). As is known, grouping dependent on the correlation of the points observed is carried out. This grouping is related to the items within the factor analysis measuring device. Thus, structure(s) in which related items gravitate to measuring may come into being. However, factor analysis is discussed as "exploratory factor analysis" and "confirmatory factor analysis" in itself (Pohlmann, 2004; Stapleton, 1997)

Groupings that depend on the correlations among item scores fall under "exploratory factor analysis." For this reason, the constructs revealed through exploratory factor analysis are also called "statistical constructs" in some sources (Knight, 2000; Pohlmann, 2004; Stapleton, 1997). In confirmatory factor analysis, theory-based item-construct relations are tested rather than being derived from the item scores themselves. Thus, the construct arrived at through confirmatory factor analysis is also called a "psychological construct" (Knight, 2000; Pohlmann, 2004).
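To make the exploratory step concrete, the Python sketch below extracts factors from a hypothetical matrix of item responses with scikit-learn's FactorAnalysis. The response matrix, the number of factors, and the item count are assumptions made purely for illustration and do not come from the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical response matrix: 300 respondents x 12 Likert-type items.
# (Random data is used only to show the workflow; with real scale data the
# loading pattern would reveal which items cluster on which factor.)
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(300, 12)).astype(float)

# Exploratory step: extract two latent factors from the observed correlations.
efa = FactorAnalysis(n_components=2, rotation="varimax")
efa.fit(responses)

# Loadings show toward which "statistical construct" each item gravitates.
loadings = efa.components_.T            # shape: (n_items, n_factors)
for i, row in enumerate(loadings, start=1):
    print(f"item {i:2d}: " + "  ".join(f"{v:+.2f}" for v in row))
```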

Guilford, who introduced the concepts of construct validity and factorial validity for the first time some 60 years ago, stated that the answer to the question "Does a test measure the desired, expected construct?" …
