Factor Analysis for Items Scored in Two Categories
Lori D. McLeod
Research Triangle Institute
Kimberly A. Swygert
Law School Admission Council
University of North Carolina at Chapel Hill
The process of test development includes checks of validity, reliability, and internal consistency for a set of items selected to measure a desired construct. By describing the underlying structure of the test, factor analysis may provide evidence that a set of test items really measures the proficiency (or proficiencies) for which the items were designed.
Factor analysis suggests the constructs (or factors) that a group of items has in common by finding patterns in the covariation among the item scores. These patterns are then used to judge the extent of internal consistency for a group of items. For example, if the items in a set all measure the same aspect of proficiency, then all of the covariation within a group of student responses should be explained by the students' scores on that one proficiency factor. If all of the items are found to measure one construct, a single score may be given to represent a student's ability level. If, instead, more than one factor is needed to explain the covariation among the items, alternative scoring (such as might be obtained by constructing subscales) must be considered.
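The logic of this check can be sketched numerically. The sketch below is illustrative only, not the procedure developed in this chapter: it simulates binary (right/wrong) responses driven by a single latent proficiency, computes the inter-item correlation matrix (for dichotomous items these Pearson correlations are phi coefficients; a fuller treatment would use tetrachoric correlations), and examines its eigenvalues. A single dominant eigenvalue is the pattern one expects when one factor accounts for the covariation among the items. The item count, sample size, and difficulty values are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 1000, 6

# One latent proficiency per student; one difficulty per item
# (a simple one-factor logistic response model, chosen for illustration).
theta = rng.normal(size=n_students)
difficulty = np.linspace(-1.0, 1.0, n_items)

# Probability of a correct response rises with proficiency minus difficulty.
p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulty[None, :])))
scores = (rng.random((n_students, n_items)) < p_correct).astype(float)

# Inter-item correlation matrix: the covariation patterns factor analysis
# summarizes.  For 0/1 items these are phi coefficients.
R = np.corrcoef(scores, rowvar=False)

# Eigenvalues of R, largest first.  With one underlying proficiency,
# the first eigenvalue should stand well apart from the rest.
eigenvalues = np.linalg.eigvalsh(R)[::-1]
print(np.round(eigenvalues, 3))
```

Because the data were generated from a single proficiency, the first eigenvalue dominates and the remaining ones cluster near or below 1; if a second construct also drove the responses, a second large eigenvalue would appear, signaling the need for subscales.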
The Relation of Factor Analysis to Curricular Objectives. Curriculum specialists often decide, based on the item content, which items are more appropriate for a test of academic achievement and which items do not belong.
Publication information: Book title: Test Scoring. Contributors: David Thissen - Editor, Howard Wainer - Editor. Publisher: Lawrence Erlbaum Associates. Place of publication: Mahwah, NJ. Publication year: 2001. Page number: 189.