differences that enters into many aspects of human behavior besides those narrowly conceived of as scholastic or intellectual.
Jensen, 1980a (Chapter 8); Jensen, 1993a. The most detailed and advanced treatments of validity are by Cronbach (1971) and Messick (1989).
Another way of conceptualizing the meaning of a validity coefficient (rxc) is in
terms of the following formula:
rxc = (T - R) / (P - R),
where T is the average level of performance on the criterion for persons selected with
the test, R is the mean level of criterion performance for persons selected at random,
and P is the mean criterion performance for perfectly selected persons, as if rxc = 1.
Hence rxc is a direct measure of the proportional gain in the mean criterion performance
that results from the use of the test for selection as compared to what the mean level of
criterion performance would be with random selection. In other words, the validity coefficient is a direct index of the test's predictive effectiveness: a validity coefficient of .50, for example, yields just half the gain in mean criterion performance afforded by a validity coefficient of 1.00, which represents perfect prediction. Even a quite modest
validity coefficient has considerable practical value when a great many binary (i.e., yes-no, pass-fail, win-lose) decisions are made. For example, the casino at Monte Carlo reaps large sums of money every day from its roulette games because the house always has slightly better odds of not losing than the gamblers have of winning; yet this house advantage is equivalent to a predictive validity coefficient of only +.027! The practical value of a validity coefficient of +.27, therefore, is certainly not negligible where a large number of selection decisions must be made.
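The proportional-gain interpretation of the validity coefficient can be checked with a small Monte Carlo sketch (not from the text); the validity coefficient of .50, the pool size, and the 10 percent selection ratio are arbitrary values chosen for illustration. Under bivariate normality, selecting the top scorers on the test and comparing their mean criterion performance (T) against random selection (R) and perfect selection (P) should recover the validity coefficient itself:

```python
import random
import statistics

random.seed(0)
r = 0.50          # assumed validity coefficient for this sketch
n = 200_000       # simulated applicant pool
k = 20_000        # number selected (top 10 percent)

# Simulate standardized test scores (x) and criterion scores (c)
# with correlation r, via the standard bivariate-normal construction.
pool = []
for _ in range(n):
    x = random.gauss(0, 1)
    c = r * x + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    pool.append((x, c))

# R: mean criterion performance under random selection (whole pool).
R = statistics.mean(c for _, c in pool)

# T: mean criterion performance of the top k selected by the test.
top_by_test = sorted(pool, key=lambda p: p[0], reverse=True)[:k]
T = statistics.mean(c for _, c in top_by_test)

# P: mean criterion performance of the top k selected on the
# criterion itself, i.e., perfect selection (as if r = 1).
top_by_criterion = sorted(pool, key=lambda p: p[1], reverse=True)[:k]
P = statistics.mean(c for _, c in top_by_criterion)

print((T - R) / (P - R))   # close to r = 0.50, up to sampling error
```

Because the simulated pool is finite, the ratio matches r only up to sampling error; with a pool this large it lands within a percentage point or two of .50.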
The standard error of estimate (SEest) is related to the validity coefficient (rxc) as follows: SEest = sc(1 - rxc²)½, where sc is the standard deviation of the criterion measure.
The ratio SEest/sc measures the proportional error of predicting individuals' point values
on the criterion. The percentage gain in accuracy of point predictions as compared with
purely random selection is equal to 100(1 - SEest/sc), which is termed the index of
forecasting efficiency. The use of this index is now in disfavor when the overall value
of test-based selection is being considered, because the index of forecasting efficiency is
not directly related to the overall mean gain in criterion performance afforded by test-
based selection. The validity coefficient itself, however, is a direct indicator of the proportional gain in mean criterion performance of individuals who were selected by means
of a test with a certain validity coefficient (see Note 2). The common habit of squaring
the validity coefficient to obtain the proportion of variance in the criterion accounted for
by the linear regression of criterion measures on test scores, although not statistically
incorrect, is an uninformative and misleading way of interpreting a validity coefficient
for any practical purpose.
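A quick numerical sketch (with an assumed validity coefficient of .50 and a criterion standard deviation of 1) shows how sharply the three interpretations diverge, and why the squared coefficient understates the practical gain:

```python
r_xc = 0.50   # assumed validity coefficient
s_c = 1.0     # assumed standard deviation of the criterion

# Standard error of estimate: SEest = sc(1 - rxc^2)^(1/2)
se_est = s_c * (1 - r_xc**2) ** 0.5

# Index of forecasting efficiency: 100(1 - SEest/sc)
efficiency = 100 * (1 - se_est / s_c)

# The "variance accounted for" interpretation: 100 * rxc^2
variance_explained = 100 * r_xc**2

print(round(se_est, 3))              # 0.866
print(round(efficiency, 1))          # 13.4
print(round(variance_explained, 1))  # 25.0
```

So a test with validity .50 delivers 50 percent of the possible gain in mean criterion performance, even though its forecasting efficiency is only about 13 percent and it "accounts for" only 25 percent of the criterion variance.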
There are several types, definitions, and statistical criteria of test bias. For a comprehensive discussion, see
Jensen, 1980a, Chapter 9.
Some psychologists distinguish between two types of knowledge: declarative
knowledge, which is knowing about something (e.g., Fe stands for iron in the periodic
table of elements; Plato wrote The Republic; yeast is used in making bread), and procedural knowledge, which is knowing how to go about doing something (e.g., trouble
Book title: The G Factor:The Science of Mental Ability.
Contributors: Arthur R. Jensen - Author.
Place of publication: Westport, CT.
Publication year: 1998.
Page number: 301.