Journal of Applied Research in the Community College

The Use of Measurement Tools in Institutional Research

Article excerpt

This article describes four measurement tools that are of potential value to institutional researchers as greater demands are placed upon their work. The author describes scale development, selected methods for setting passing scores, the validation of passing scores, and the topic of equating, both equipercentile and linear. Not only should institutional researchers and other higher education and community college professionals become more familiar with these techniques; it may also be beneficial for them to become well-versed in their use.

(ProQuest: ... denotes formulae omitted.)

Background

Increasingly, institutional researchers must conduct research in a number of different methodological areas in order to address the problems of their colleges and meet the data requirements of their institutions. These methodological areas include (1) descriptive statistics for completing internal and external forms on characteristics such as enrollment, graduation, transfer, accountability, outcomes assessment, student and faculty demographics, attrition and retention; (2) descriptive and correlational methods used in survey research for assessing various needs of the institution (e.g., marketing surveys for new and existing programs, faculty evaluation, student engagement and satisfaction surveys); and (3) theoretical applications of descriptive and inferential statistics used in longer-range research projects that attempt to determine the factors that underlie problems of retention, transfer, and student learning and development. Depending on how an institution is organized, the institutional researcher may also be called upon to serve in the role of psychometrician. This article is devoted to an overview of four aspects of measurement that may be useful to the institutional research (IR) practitioner, including developing new scales, establishing passing scores for tests, validating passing scores, and equating tests. These methodological approaches will be presented with application examples from Rockland Community College and Educational Testing Service.

Developing New Scales

When a new Pluralism and Diversity course was developed at Rockland Community College, with the aim of promoting desired changes in students, instruments had to be developed and administered to students to assess whether the intended changes had occurred. Such instruments are often used in a pretest-posttest control group research design to determine whether the desired changes on the constructs in question are statistically significant. (The Pluralism and Diversity course is a required course at Rockland for all associate in arts and associate in science degree students, with a total enrollment of 1,057 students in the fall 2008 semester.) The question then arises: What constructs should such an instrument measure? To answer this question, it is necessary to develop an instrument using principles of psychometrics and scale development.
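
As a rough illustration of how such a design might be analyzed, the sketch below (in Python) compares gain scores between a treatment and a control group with an independent-samples t-test. The scale, the score values, and the choice of a gain-score t-test rather than an analysis of covariance are assumptions made for the example and are not taken from the Rockland study.

# Minimal sketch of a pretest-posttest control group analysis.
# All scores, group sizes, and the alpha level are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical pretest and posttest scores on a diversity-attitudes scale.
treat_pre  = np.array([34, 29, 41, 37, 30, 33, 38, 36])
treat_post = np.array([40, 35, 44, 42, 36, 39, 41, 43])
ctrl_pre   = np.array([33, 31, 39, 36, 29, 34, 37, 35])
ctrl_post  = np.array([34, 30, 40, 38, 31, 35, 36, 37])

# Compare gain scores (posttest minus pretest) between the two groups.
treat_gain = treat_post - treat_pre
ctrl_gain = ctrl_post - ctrl_pre
t_stat, p_value = stats.ttest_ind(treat_gain, ctrl_gain)

print(f"Mean gain, treatment group: {treat_gain.mean():.2f}")
print(f"Mean gain, control group:   {ctrl_gain.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # conventional significance level, assumed here
    print("The groups' gain scores differ significantly.")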

The first step in scale or instrument development is to write items according to specifications that list the domains to be measured by the intended construct(s) on which students are expected to change. Item construction can be theoretical or atheoretical. In other words, an existing theory can be used to generate the items, if such a theory exists, or the items can be written directly to correspond to the domains of interest. The items can also come from some pre-existing scale measuring the same constructs. The items are generated until a pool of items has been created that exhausts all conceivable facets of the domain. This is the essential idea behind domain-referenced measurement. With simple domains, all conceivable facets can be listed and a sample of representative items can be drawn. With more complex domains, this goal is more difficult to achieve.
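
As a concrete sketch of the sampling step, the following Python fragment draws a stratified sample of items from a domain-tagged pool according to a simple test blueprint. The domain names, pool sizes, and blueprint proportions are hypothetical and serve only to illustrate the idea of drawing a representative item sample from a defined domain.

# Minimal sketch of drawing a representative item sample from a
# domain-referenced pool; all domains and proportions are hypothetical.
import random

# Hypothetical item pool: each item is tagged by the domain (facet) it covers.
item_pool = {
    "cultural_awareness":   [f"CA{i:02d}" for i in range(1, 21)],  # 20 items
    "intergroup_attitudes": [f"IA{i:02d}" for i in range(1, 16)],  # 15 items
    "civic_engagement":     [f"CE{i:02d}" for i in range(1, 11)],  # 10 items
}

# Test specifications (blueprint): share of the instrument given to each domain.
blueprint = {
    "cultural_awareness": 0.40,
    "intergroup_attitudes": 0.40,
    "civic_engagement": 0.20,
}

def draw_form(pool, spec, total_items, seed=None):
    """Draw a stratified random sample of items matching the blueprint."""
    rng = random.Random(seed)
    form = []
    for domain, proportion in spec.items():
        n_items = round(total_items * proportion)
        form.extend(rng.sample(pool[domain], n_items))
    return form

print(draw_form(item_pool, blueprint, total_items=20, seed=42))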

Once the items are written, they are subjected to review by experts in the field. …
