Informing Science: the International Journal of an Emerging Transdiscipline

Using the ASSIST Short Form for Evaluating an Information Technology Application: Validity and Reliability Issues


The Library of Crop Technology is a collection of web-based learning objects developed initially at a large Midwestern American university, with help from other universities and funding agencies, to provide peer-reviewed, unbiased, science-based information about biotechnology and other plant science topics. Its intended audiences include students in resident or distance courses, participants in extension or outreach activities, and members of the public. University instructors, community college teachers, adult educators, high school science and agriculture teachers, agronomists, seed company sales representatives, industry trainers, crop consultants, journalists, dieticians, and nutritionists have used it. It has since been expanded and renamed the Plant and Soil Sciences eLibrary (Namuth, Fritz, King, & Boren, 2005).

Formative evaluations of the lessons included a volunteer group of students interested in crop genetic engineering and students in two semesters of an introductory genetics course (Hain, 1999). Students were interviewed and asked open-ended questions about their learning strategies relative to six features of the lessons: objectives, text, images, animations, glossary, and quizzes.

As the database of lessons and topics grew, the developers' interest in self-reported strategies evolved into a research program including students at several universities who were using the learning objects library. This research program made it possible to assemble a larger and more heterogeneous sample and, as a side effect, get a better sense of the reliability and validity of the ASSIST scales. Those results are reported here.

Validity and Reliability

Two indicators of quality for any kind of mental measurement are validity and reliability. According to Allen and Yen (1979, p. 95), "A test has validity if it measures what it purports to measure," and "Validity can be assessed in several ways, depending on the test and its intended use." Richardson (2004, p. 353) stated that, "Any research instrument should be validated from scratch in each new context in which it is used." According to the StatsDirect Limited (2006b) online dictionary of statistics terms, it is easier to achieve high internal validity in laboratory experiments with tightly controlled conditions than in studies conducted in environments that are difficult to control (like classrooms).

The other hallmark of quality in the measurement world, reliability, means that the test or questionnaire measures what it claims to measure consistently: either its scores are stable over time, or the items combined to produce scores have high enough positive inter-item correlations to yield meaningful scores. A coefficient of reliability can be calculated using various formulas (Allen & Yen, 1979, pp. 72-92). Lee Cronbach of Stanford University developed one commonly used formula for calculating the internal consistency of items on a scale, called Cronbach's alpha or α. According to StatsDirect Limited (2006a), by convention an alpha of 0.80 is considered adequate for many purposes. But the adequacy of a coefficient of reliability depends on the type of scale and the purposes for which the scores are being used. One example, from outside the approaches-to-studying literature, comes from Schott and Bellin (2001, p. 88), who were interested in questionnaires measuring self-concept and who considered reliabilities of 0.69 to 0.77 acceptable.
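In standard notation, Cronbach's alpha for a scale of k items is

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where \sigma^{2}_{Y_i} is the variance of scores on item i and \sigma^{2}_{X} is the variance of the total scale score; the coefficient rises as the items share more of their variance.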

Koohang (2004) reported development of a new instrument to facilitate evaluation of the usability of digital libraries. He addressed the issue of construct validity by conducting a principal components factor analysis of the items and found that the items all addressed a single trait. The internal consistency of his scale was 0.96. This study describes a similar process, but the ASSIST had a longer history, with adaptations for new purposes and modifications to widen its theoretical base. …
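As a rough illustration of the two kinds of evidence just described, the following Python sketch computes the eigenvalues of an item correlation matrix (a dominant first eigenvalue suggests the items tap a single trait) and Cronbach's alpha for internal consistency. The data, function names, and parameters are simulated and hypothetical; they are not taken from Koohang's study or from the ASSIST data reported here.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def correlation_eigenvalues(items: np.ndarray) -> np.ndarray:
    """Eigenvalues of the item correlation matrix, largest first; one dominant
    eigenvalue is the sort of evidence used to argue the items measure one trait."""
    corr = np.corrcoef(items, rowvar=False)
    return np.linalg.eigvalsh(corr)[::-1]           # eigvalsh returns ascending order

# Simulated 5-point responses driven by a single latent trait (illustration only).
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
responses = np.clip(np.rint(3 + trait + rng.normal(scale=0.7, size=(200, 6))), 1, 5)

print("alpha:", round(cronbach_alpha(responses), 2))
print("eigenvalues:", np.round(correlation_eigenvalues(responses), 2))

Applied to real item responses, the same two summaries would let a reader judge both unidimensionality and internal consistency before interpreting scale scores.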
