The Construct Validity of Differential Response Latencies in Structured Personality Tests



This research demonstrates that differential latencies for responding to personality test items contain interpretable information. In particular, we show that differential response latencies are a meaningful indicator of the presence of a trait. A total of 92 subjects responded to a series of microcomputerized personality test items reflecting four different traits on each of four occasions. Estimates of internal consistency, parallel forms reliability, and test-retest stability suggested that the reliability of the response latencies was modest. Nonetheless, differential response latencies showed excellent convergent validity for corresponding trait level measures, such as scale scores, self-ratings, and peer ratings, and excellent discriminant validity for irrelevant trait level measures. Moreover, as predicted, the latencies for endorsing trait relevant items were negatively related to trait level measures whereas the latencies for rejecting items were positively related. Differential response latencies had no tendency to group together as a method factor. Rather, the pattern of convergent and discriminant relationships generalized across all four retest sessions. These results support the interpretation of differential latencies for responding to test items as a construct valid way of assessing the strength of a trait.



Anyone who has ever administered a structured personality test will know that some respondents take an exceptionally long time to respond to certain test items. From a neomentalistic perspective, the latency associated with making judgements about the self has been considered a behavioural representation of a cognitive process (Paivio, 1975). Test item response latencies, thus, may be interpretable within a cognitive framework. With the advent of microcomputerized testing, collecting individuals' item response latencies is a small matter. Unfortunately, response latencies constitute very messy data (Fazio, 1990). Whether response latencies to structured personality test items, in fact, can be interpreted as a meaningful individual difference variable is unclear from the existing literature. The purpose of this paper is: a) to demonstrate empirically that differential, as opposed to raw, latencies for responding to structured personality test items have construct validity for assessing the presence of a trait; and b) to argue that differential response latencies may indeed be understood within a cognitive psychological framework.

What is the meaning of a latency for responding to a personality test item? Early research dismissed the validity of test item response latencies, finding that response latency variance was largely attributable to item properties such as length (Dunn, Lushene, & O'Neil, 1972) and social desirability (Follette, 1984). Other researchers argued that test item response latencies simply represented differences in motor speed that are logically associated with different personality dimensions. For example, one feature of mania or impulsivity is speed; conversely, slow movements and slow decision making are aspects of depression. The evidence for this motor speed hypothesis is mixed: Holden and Hickman (1987) showed that high scores on the Speed and Impatience scale of the Jenkins Activity Survey were negatively correlated with latencies for responding to the items on that scale; Davis (1979) was unable to show that depressed people responded to general personality information more slowly than nondepressed people.

A more fruitful line of research for understanding item response latencies has adopted a cognitive psychological framework. That is, the latency of responding to a personality test item may reflect the presence of an integrated network of self-knowledge, called a schema. People with a schema for a particular personality dimension process information relevant to that dimension faster than people lacking such a schema. Markus (1977) showed that, relative to aschematics, people with a schema for independence responded more quickly to adjectives related to independence. Similarly, Mills (1983) showed that sex-typed males and females had shorter response latencies for sex-congruent than for sex-incongruent words; balanced individuals showed no differences in the speed of responding to these two types of words. Thus, the availability of well-organized information about the self facilitates processing relevant personality information.

Despite its promise, not all studies find empirical support for the schema approach. Dobson and Shaw (1987) found no differences in the response latencies of depressed versus nondepressed people to 32 adjectives tapping depression. Similarly, Mervielde (1988) found that raw latencies for responding to personality adjectives reflecting masculinity and femininity were unrelated to sex-typing based on these adjectives. Krug, Keefe, and Ahadi (1986) collected item response latencies for a whole variety of personality characteristics and found no correlation between the sum of the raw latencies for responding to the items on a scale and total score on the scale. Like Mervielde, they concluded that test item latencies are not related to personality in any meaningful way.

Such a pessimistic conclusion may be premature, however. Raw test item response latencies are probably uninterpretable because they contain many sources of variance. One such source of variance is associated with the respondent's specific decision. That is, when a personality test item is compared to the schema, the organization of the considerable information in the schema should facilitate the decision that the item is consistent with the schema. Conversely, schema presence should make the decision to reject an item much slower, as a large amount of information is examined in search of a schema-item match. The differing effect of schema presence on endorsing versus rejecting an item has been shown in various studies.

Both Kuiper (1981) and Mueller, Thompson, and Dugan (1986) showed that individuals who gave themselves high or low ratings on adjectives responded more quickly than individuals who gave themselves moderate ratings. Extreme self-ratings presumably reflect self-congruent decisions: endorsement of the adjective by schematics and rejection by aschematics. Erdle and Lalonde (1986) generalized this phenomenon to true-false personality test items. They showed that the latency for endorsing personality test items was negatively related to scale score (mean r = -.14) whereas the latency for rejecting test items was positively related to scale score (mean r = .14). As predicted, schema presence made item endorsement faster and item rejection slower.

Erdle and Lalonde recognized that, because respondents would endorse and reject different subsets of items, they would need to control for the contribution of item characteristics to response latencies. This is not enough, however; raw item latencies are also contaminated by person characteristics. Holden, Fekken, and Cotton (1991) have proposed that the process of responding to test items incorporates both item and person factors. Consider that responding to an item may involve these stages: 1) encoding the item; 2) comprehending the item; 3) comparing the item to the self; and 4) selecting a response (Rogers, 1974). Thus, the latency for encoding the item may be affected by an item property such as length, but it should also be affected by person factors such as reading speed. Similarly, the latency for comprehending the item may be affected by both the ambiguity of the item as well as the respondent's level of verbal ability. Adjusting raw latencies for item and for person factors results in a differential response latency that reflects the interaction between item and person factors, in particular, the interaction between a schema relevant item and the schema itself. Fekken and Holden and their colleagues routinely control for the effects of irrelevant item characteristics as well as irrelevant person characteristics. Thus, they have demonstrated a pattern of relationships even stronger than Erdle and Lalonde's (1986) between the respective latencies for endorsing and for rejecting items with scale scores on tests of normal personality [Fekken & Holden, 1992 (mean r = -.32 and .28)]; of children's personality [Simola & Holden, 1989 (mean r = -.45 and .48)]; and of psychopathology [Holden & Fekken, 1987 (mean r = -.14 and .33); Holden et al., 1991 (mean r = -.15 and .31); Popham & Holden, 1990 (mean r = -.27 and .31)].

Differential response latencies clearly show predictable relationships to personality scale scores which may be understood in terms of schema theory. When endorsing an item, the presence of an elaborate, well-organized schema is reflected in a short differential response latency; when rejecting an item, the presence of that same elaborate, well-organized schema is reflected in a long latency. Despite these promising data, any conclusion that test item response latencies are meaningful individual difference variables demands further demonstration that latencies have certain properties, supporting the argument that latencies have construct validity for tapping the strength of a trait. In this study, we hypothesized that differential response latencies will demonstrate: a) reliability; b) convergent validity for schema relevant scale scores, self-ratings, and peer-ratings; c) discriminant validity for schema-irrelevant scale scores and ratings; and d) temporal stability with regard to their pattern of convergent and discriminant relationships.

Method

Subjects

The initial sample comprised 17 pairs of male roommates and 33 pairs of female roommates, all of whom were full-time students at a medium-sized Canadian university. They ranged in age from 17 to 27 years (mean = 20.07, standard deviation = 1.82). Subjects were paid for their participation in four successive testing sessions. They received $5 at the end of the first testing session; $10 at the end of the second; $15 at the end of the third; and $20 at the end of the fourth and final testing session. Of the original 100 subjects, 92 completed all sessions.

Measures

Five traits were the focus of this study: ambitiousness, sociability, cautiousness, warmth, and orderliness. These traits were measured using the Achievement, Affiliation, Harmavoidance, Nurturance, and Order scales from the Personality Research Form (PRF; Jackson, 1984). The PRF is a true-false measure selected because of its clear construct definitions, homogeneous scales, strong psychometric properties, and adaptability to the computer (Fekken & Holden, 1989). Forty PRF items per trait, 20 from each of the parallel PRF Forms AA and BB, were presented in a microcomputerized format, allowing the collection of scale scores as well as item response latencies. Two other measures of subjects' position on each of the five traits were also obtained. Self-ratings were made using a seven-point scale on adjectives that had previously shown good validity for the PRF scales (Fekken & Holden, 1989). The adjectives were: ambitious, sociable, risk-taking (keying reflected), warm, and orderly. Peer-ratings were collected using the sets of 16 true-false items from Form E of the PRF comprising the Achievement, Affiliation, Harmavoidance, Nurturance, and Order scales.

Procedure

Pairs of same-sex roommates were recruited by placing posters around campus and by advertising in the campus newspaper. Subjects were assessed twice in each of the two regular academic terms, with approximately 5 weeks separating Time 1 and Time 2, 17 weeks separating Time 2 and Time 3, and 6 weeks separating Time 3 and Time 4.

Subjects were scheduled to arrive at the Personality Assessment Laboratory with their roommates. In each session, they worked simultaneously in separate testing rooms. They always responded to the computerized items first, followed by the self-ratings, and then the roommate ratings. Personality test items comprising a computerized questionnaire or a rating form were always presented in the same fixed order across the four testing sessions.

Subjects were familiarized with a Zenith microcomputer with a monochrome monitor. They answered 10 practice items, during which time the experimenter was available to answer any questions. Then, working alone, they responded to the 200 PRF items. Items were presented sequentially; subjects could press either the "T" or the "F" key to indicate a "True" or a "False" response, respectively. Pressing either key caused the screen to be cleared and the next item to be presented. Subjects' only other response option was to revise their answer to the immediately preceding item by pressing the "R" key to "Redo" the item. Subjects were not explicitly told that response latencies were being collected.

After the computerized assessment, subjects completed self-ratings on the five adjectives and responded to the 80 PRF Form E items in the way that they believed their roommate would respond to these items.

Results

Preliminary Analyses

The five 40-item PRF scales showed strong psychometric properties in all four testing sessions. Means and standard deviations were comparable to published data for standard paper-and-pencil versions of the Personality Research Form (Jackson, 1984). Internal consistencies were all as high as might be expected for 40-item scales; coefficients alpha ranged from .87 to .96. Parallel forms reliability, calculated by correlating total scores on the 20 items making up PRF Form AA with total scores on PRF Form BB, was similarly high; uncorrected reliabilities ranged from r = .80 to r = .95. Finally, test-retest reliabilities were excellent, ranging from r = .77 to r = .97.

Calculation of the Differential Response Latencies

As a preliminary step, item response latencies were trimmed to deal with a very small number of outliers (i.e., less than 0.05 percent of the data). Consistent with our previous work (Fekken & Holden, 1992), raw item latencies that were smaller than .5 seconds or exceeded 40 seconds were set to .5 and 40 seconds, respectively. Next, latencies were standardized twice. First, raw response latencies were standardized within individuals to remove a large general factor due to individual differences in response speed. Second, these standardized response latencies were standardized once again within each item across individuals. This standardization adjusted the latencies for factors associated with item differences, such as item length, ambiguity, etc. Together, the two standardizations yielded a differential response latency that reflects the relative difficulty of a particular item for a particular person. Subsequently, any outliers that still had z-scores beyond -3.00 or +3.00 were set to these values. Adjusting outliers is frequently performed to reduce statistical artifacts in a response latency distribution and is usually preferable to the data loss engendered by dropping outliers (see Fazio, 1990).
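
The steps above can be sketched in code. This is only an illustrative reconstruction of the described procedure, not the authors' software; the function name and the persons-by-items data layout are assumptions.

```python
import numpy as np

def differential_latencies(raw, low=0.5, high=40.0, clip_z=3.0):
    """Double-standardize a persons x items matrix of raw latencies (seconds),
    following the procedure described in the text."""
    # 1. Trim extreme raw latencies to the stated bounds (.5 and 40 seconds).
    lat = np.clip(raw, low, high)
    # 2. Standardize within each person (across items) to remove a general
    #    factor due to individual differences in response speed.
    lat = (lat - lat.mean(axis=1, keepdims=True)) / lat.std(axis=1, keepdims=True)
    # 3. Standardize within each item (across persons) to adjust for item
    #    factors such as length and ambiguity.
    lat = (lat - lat.mean(axis=0, keepdims=True)) / lat.std(axis=0, keepdims=True)
    # 4. Set any remaining outliers beyond +/- 3 z-units to those values.
    return np.clip(lat, -clip_z, clip_z)

# Illustrative run on simulated skewed latencies (92 subjects x 40 items).
rng = np.random.default_rng(0)
z = differential_latencies(rng.lognormal(0.5, 0.6, size=(92, 40)))
```

The result is a matrix of differential latencies in z-units, one per person per item.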

The mean response latency for endorsed items and the mean response latency for rejected items were calculated separately for each individual on each of the five PRF scales. In those instances where a respondent endorsed or rejected all items, we assigned a missing value for the corresponding mean latency estimate. Only about 3 to 5 percent of the respondents endorsed or rejected all 40 items on a scale. A composite mean response latency was also calculated for each scale. PRF scales are bipolar, such that saying "True" to "I am a messy person", for example, is treated as psychologically equivalent to saying "False" to "I am a neat person." In other words, endorsing a protrait item is the logical obverse of rejecting a contrait item. Moreover, the test constructor's decision to make one or the other pole of the trait the emergent pole (e.g., to refer to the trait as "messiness" versus "orderliness") is arbitrary. The advantage of combining the latencies for these logically inverse processes is that it yields a composite that should be a better estimate of the true score than either of the mean latencies for endorsing or for rejecting items alone. To calculate the composite mean response latency, we simply subtracted the mean response latency of the rejected items from the mean response latency of the endorsed items. We also assigned a missing value on the composite mean response latency to the 3 to 5 percent of people who endorsed or rejected all 40 items on a given scale. This practice is consistent with the way in which mean response latencies for endorsed and rejected items were handled; it involves relatively few cases; and it avoids the assumption that the mean response latency for endorsing or rejecting items alone is an appropriate substitute for the composite latency.
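
A minimal sketch of the composite calculation for one person on one scale, assuming `z_lat` holds that person's doubly standardized latencies and `endorsed` holds their True/False responses (both hypothetical variable names):

```python
import numpy as np

def composite_latency(z_lat, endorsed):
    """Composite mean differential latency: mean latency over endorsed items
    minus mean latency over rejected items, as described in the text."""
    # Assign a missing value when every item on the scale goes one way.
    if endorsed.all() or (~endorsed).all():
        return np.nan
    return z_lat[endorsed].mean() - z_lat[~endorsed].mean()
```

Because schema presence should shorten endorsement latencies and lengthen rejection latencies, a high trait level should yield a more negative composite.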

Reliability of the Differential Response Latencies

Three estimates of reliability were examined for the composite mean latencies for each of the four times, namely, internal consistency, parallel forms reliability, and test-retest stability (see Table 1). Parenthetically, parallel forms and test-retest reliabilities were also examined separately for the mean latency for endorsing items and the mean latency for rejecting items. Reliabilities for the mean latencies for endorsing and for rejecting items did not differ appreciably from one another and in fact tended to be marginally lower than the reliability estimates for the composite latencies. Coefficient alpha could not be calculated for the mean latencies for endorsing and for rejecting items because subjects endorsed and rejected different subsets of items. Below we discuss the reliability estimates for the composite mean latency only.

To calculate internal consistencies, we first multiplied each individual's specific item latencies by 1 if he/she had endorsed an item and by -1 if he/she had rejected an item to ensure that all items on a scale were keyed in the same direction. Then, for each of the five scales, coefficient alpha was calculated across all subjects using the 40 doubly standardized response latencies associated with the relevant scale. Across the four testing times, coefficients alpha were quite modest, ranging from .13 to .58, with a mean of .34.
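
The keying step and the standard coefficient alpha formula can be sketched as follows; this is an illustrative reconstruction, not the original analysis code.

```python
import numpy as np

def keyed_latencies(z_lat, endorsed):
    """Multiply each doubly standardized latency by +1 if the item was
    endorsed and by -1 if it was rejected, so that all items on a scale
    are keyed in the same direction before computing alpha."""
    return np.where(endorsed, z_lat, -z_lat)

def cronbach_alpha(X):
    """Coefficient alpha for a persons x items data matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

Applied to the 40 keyed latencies per scale, this yields the alphas reported above.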

The next reliability estimate was based on the PRF parallel forms. Within each testing time, the composite mean latency was calculated separately for the 20 items that comprise Form AA and for the 20 items that comprise Form BB. The two composite mean latencies were simply correlated to get an estimate of parallel forms reliability. These reliability estimates were surprisingly weak. Exactly half of them were nonsignificant. The overall mean across the four testing times was .17 (range -.04 to .34). (Note that whenever we report a mean correlation we followed this procedure: the original correlations were transformed from r to z using Fisher's formula; the mean z was calculated; and this mean z was transformed back to an r for the sake of interpretability.)
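
The r-to-z averaging procedure in the parenthetical note corresponds to the following small sketch (Fisher's z is the inverse hyperbolic tangent of r):

```python
import numpy as np

def mean_correlation(rs):
    """Average a set of correlations via Fisher's transform: convert each
    r to z = arctanh(r), average the z's, then back-transform with tanh."""
    z = np.arctanh(np.asarray(rs, dtype=float))
    return float(np.tanh(z.mean()))
```

Averaging in z-units rather than raw r-units avoids the bias introduced by the bounded, skewed sampling distribution of r.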

Finally, retest stability was evaluated by intercorrelating the composite mean latencies for each scale across all possible combinations of the four testing times. This yielded a total of six retest reliabilities for each of the five traits. These reliabilities were somewhat stronger than the parallel forms reliabilities, ranging from .06 to .60, with a mean of .34.

Validity of the Differential Response Latencies

The first set of analyses was conducted within each of the five traits. The mean latency for endorsing items, the mean latency for rejecting items, and the composite mean latency were correlated with the total scale score, the self-rating, and the peer-rating criteria. To evaluate discriminant validity, the composite mean latency for a PRF scale was correlated with the trait level measures for each of the four irrelevant scales, and the average correlation from these four irrelevant scales was computed for each of the total scale scores, the self-ratings, and the peer-ratings. Results for all four sessions are presented in Table 2.

A consistent negative relationship between trait level measures and corresponding latencies for endorsing trait-relevant items emerged for all four sessions. These relationships, averaged across the four traits, are illustrated in Figure 1. Although the correlations tended to be slightly stronger for scale scores than for self- or peer-ratings in each of the sessions, attributing the item to the self quickly was nonetheless clearly related to obtaining higher scores for all three operationalizations of the relevant trait. In a parallel fashion, trait level measures had consistently positive relationships to the trait relevant latencies for rejecting PRF items across all four sessions (again see Table 2). The size of the average correlations across the five traits ranged from .15 to .38. As predicted, high scores on a trait were associated with being slow to reject trait-relevant items as not self-descriptive.

The complementary nature of the relationships of response latencies for endorsing and rejecting items to trait level measures supports our strategy of combining latencies into a more reliable composite. For the composite latency, correlations with scale scores, self-ratings, and peer-ratings ranged from -.24 to -.52 across the four sessions. These relationships are all significant and contrast strongly with the overwhelmingly trivial relationships of the composite to measures of level on irrelevant traits. Thus, latencies do appear to possess discriminant as well as convergent validity.

A second set of analyses was conducted to evaluate directly the convergent and discriminant validity of the composite mean latencies in the context of all of the trait level information for all five traits. Separately for each of the four testing times, a multimethod factor analysis (Jackson, 1975) was conducted across the five traits by four methods (i.e., scale score, self-rating, peer-rating, and composite mean latency). The calculation of principal components loadings was followed by the extraction of five factors and rotation to a matrix that targeted the scale score, self-rating, peer-rating, and (negative) composite mean latency together onto an orthogonal factor for each of the five traits. Results from the first testing time are presented in Table 3. Clearly, different methods of assessing the same trait load together. Of particular interest is the fact that the composite mean latencies each load with the other relevant trait measures. As anticipated, the composite mean latency for a trait shares variance with other indicators of individuals' levels on that trait. The results of the multimethod factor analysis for Time 1 are highly representative of the results found for the other three testing times. Coefficients of congruence among the five corresponding trait factors obtained from the multimethod factor analyses conducted at each of the four times are uniformly high (range .84 to .99, mean = .95).
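
The congruence index used to compare trait factors across times is standardly computed as Tucker's coefficient of congruence; a minimal sketch (the loading vectors here are illustrative, not the study's values):

```python
import numpy as np

def congruence(x, y):
    """Tucker's coefficient of congruence between two factor loading
    vectors: their inner product divided by the product of their norms."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))
```

Unlike a correlation, the coefficient is not centered, so proportional loading patterns yield a congruence of exactly 1.0.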

Discussion

This study supports the view that differences in the relative amount of time individuals spend responding to a personality test item are meaningful. Long differential response latencies tend to be produced by respondents who reject a specific item as nondescriptive, even though the overall trait reflected in the item does describe them. Conversely, short latencies tend to be given by individuals who endorse items that tap traits which are self-descriptive. At one level, then, outstanding differential item response latencies signal the presence of a trait. If we further adopt a cognitive perspective, namely that trait level represents a schema which contains well-organized, domain-specific information about the self (Fiske & Taylor, 1984), then differential item response latencies may be a behavioural index of the item response process. The presence of the schema facilitates the decision that a schema relevant item is congruent with the self but inhibits the decision that a schema relevant item does not describe the self. Our data can be contrasted with Anderson's (1990) demonstration of a "fan effect". Anderson has shown that increasing the amount of information in a schema may actually slow down the processing of schema relevant information. However, Anderson was not explicitly working with self-information. A self-schema may simply be better organized than other cognitive schemata, for example, because self-schemata are more frequently accessed or because the components of a self-schema have a certain emotional valence. Nonetheless, our interpretation of differential latencies as indices of the item response process is based on correlational data. Further research within an experimental paradigm is mandatory.

Is it fair to say that differential response latencies have construct validity as indicators of the presence of a trait? Consider first the reliabilities of the differential response latencies. The internal consistencies provided evidence for modest homogeneity among latencies. Note, however, how the calculation of coefficient alpha for response latencies differs from that calculation for ordinary item responses. In the case of item responses, the same items are keyed true or false for all subjects. With response latencies, it is not the keying of the item but rather whether a subject endorses or rejects the item that is essential. Thus, each subject will potentially have different items that get a positive or negative weight as coefficient alpha is determined. If the keying of an item and the probability that a subject will endorse or reject that item interact in some complex manner, then our estimates of the internal consistency of item response latencies can be expected to be attenuated. Differential response latencies also showed some measure of test-retest stability, indicating that respondents tended to have similar differential latencies for responding to items across time. These reliability estimates are not as strong as those obtained for the scale scores based on the same items. Latencies clearly contain error variance, perhaps because they are more susceptible than item responses to the effects of momentary distractions, fatigue, or boredom.

The estimates of reliability for the composite differential latencies based on the parallel sets of 20 items administered in the same session showed weak relationships even though scale scores based on these same two item sets were strongly related. This is a disappointing result because the parallel forms reliability of the latencies is the most conceptually and methodologically straightforward reliability analysis of the three. One contributing factor may have been the somewhat smaller sample sizes on which the reliability estimates were based because of our decision not to estimate the composite latency for respondents who endorsed or rejected all 20 items on one of the parallel forms. Although most parallel forms reliability estimates were based on 80 to 90 respondents, samples were occasionally as small as 69 respondents. Another factor may have been that 20 items are too few on which to base a mean endorsed or rejected latency estimate. Indeed, the very reason for using 40 homogeneous items in this study was to reduce the probability that estimates for people who are extreme on the trait would have to be based on a very small number of items. Stepping up the parallel forms reliability using the Spearman-Brown formula illustrates how the mean reliability of the latency estimates increases from .17 to .29 as we move from 20 to 40 items. The use of estimates based on 40 items probably helped to mitigate the effects of low reliability on the validity of the differential response latencies. Nonetheless, further research on the explanation for the modest reliabilities and on methods for improving the reliability of differential latencies is needed.
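
The step-up from .17 to .29 follows directly from the Spearman-Brown prophecy formula, sketched here for illustration:

```python
def spearman_brown(r, k):
    """Predicted reliability of a test lengthened by a factor of k,
    given current reliability r: k*r / (1 + (k - 1)*r)."""
    return k * r / (1 + (k - 1) * r)

# Doubling from 20 to 40 items (k = 2) steps the mean parallel forms
# reliability of .17 up to approximately .29.
stepped = spearman_brown(0.17, 2)
```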

Despite the modest reliability of the differential response latencies, their convergent and discriminant validity was extremely good. Differential response latencies for endorsing and rejecting items showed a consistent pattern of negative and positive relationships across personality scale scores, self-ratings, and independently collected peer-ratings. The multitrait-multimethod factor analysis demonstrated further that differential response latencies had discriminant validity; that is, response latencies did not simply group together but were rather associated with their conceptually relevant traits. We can rule out that latencies simply represent a method factor. Finally, the evidence for convergent and discriminant validity was stable over the four testing sessions. These results are in clear contrast to work showing no relationship of item response latencies to trait presence (e.g., Krug et al., 1986; Mervielde, 1988). We argue that differential response latencies do contain valid variance, extending the phenomenon by showing its applicability over alternate indices of trait level and its durability over time.

Of central importance, of course, is the distinction between raw response latencies and differential response latencies. Where response latencies have been systematically related to trait level, a distinction between endorsed and rejected items has ordinarily been made (Erdle & Lalonde, 1986; Fekken & Holden, 1992; Holden & Fekken, 1987; Holden et al., 1991; Kuiper, 1981; Mueller et al., 1986; Popham & Holden, 1990; Simola & Holden, 1989). Where this distinction has not been made, the predicted relationship between schema presence and latencies has not always been found (e.g., Dobson & Shaw, 1987) or the results have been difficult to replicate. For example, Mervielde (1988) could not replicate Mills' (1983) work using sex-typed individuals and adjectives.

From an item processing perspective, the decision to attribute the content of an item to the self is likely distinct from the decision to reject it. By definition, schemata contain highly organized information (Fiske & Taylor, 1984). Perhaps this information is organized hierarchically, by levels of specificity (Ebbesen & Allen, 1979). For example, a person may perceive herself as "orderly", which subsumes being "neat" and being "punctual". Being "neat" is further associated with particular behaviours, such as cleaning her house, doing laundry, tidying her desk, and so on. For such a schematic, the short latency for endorsing an item related to "orderliness" reflects a comparison of the item to higher order information about the self or a search through an established network of information. Conversely, rejecting an item is relatively slow because numerous paths and pieces of information need to be searched. The aschematic is relatively quick to reject an item because a search of higher level information yields no self-item match. Endorsing an item is much slower for the aschematic than for the schematic because the search for a self-item match involves many bits of specific information that are loosely organized. Preliminary support for such a model comes from data showing that "no" responses are slower than "yes" responses, at least when judging socially desirable stimuli (Lewicki, 1984; Tetrick, 1989). A model of the item response process that refers to a schema with a particular structure, such as the hierarchical structure described above, might prove a fruitful avenue for future research.

The second defining characteristic of differential response latencies is the double standardization procedure. By correcting for individual differences in latency due to reading speed, motivation, etc., and for item effects on latencies due to length, ambiguity, social desirability, etc., double standardization yields a latency that reflects the relative difficulty of an item for an individual. Although our data generally support the construct validity of differential response latencies, the reliability evidence suggests that extraneous sources of variance still exist in differential latencies. Gross anomalies (such as a single raw item response latency of 700 seconds obtained for one of our subjects) might be mitigated through experimental controls. Alternative statistical procedures might also increase the proportion of true score variance in differential response latencies. For example, a transformation (e.g., log, square root) of the response latency data could be used to handle outliers. This procedure is unlikely to yield results that differ substantially from our present ones, because there were so few outliers (i.e., less than .05 per cent) and our focus was on mean latencies rather than individual latencies per se, but it could be quite appropriate in future studies. Certainly, extraneous sources of variance contained in personality test item latencies should not be underestimated, and endeavours to correct for them are not only legitimate but necessary (Fazio, 1990).
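The double standardization described above can be illustrated with a minimal sketch. The code below assumes a persons x items matrix of raw latencies and applies sequential z-scoring within persons (removing individual speed differences) and then within items (removing item effects); the exact computational details of the study's procedure are not given in this excerpt, so this is an illustrative reconstruction, not the authors' implementation:

```python
import numpy as np

def double_standardize(latencies):
    """Doubly standardize a persons x items latency matrix: z-score within
    each person, then z-score within each item. An illustrative sketch; the
    order and details of the original procedure are assumptions.
    """
    z = np.asarray(latencies, dtype=float)
    # Remove individual differences in overall speed (row-wise z-scores).
    z = (z - z.mean(axis=1, keepdims=True)) / z.std(axis=1, keepdims=True)
    # Remove item effects such as length or ambiguity (column-wise z-scores).
    z = (z - z.mean(axis=0, keepdims=True)) / z.std(axis=0, keepdims=True)
    return z

# Hypothetical example: 4 respondents x 3 items, raw latencies in seconds.
raw = [[1.2, 2.5, 3.1],
       [0.9, 2.0, 2.8],
       [2.2, 4.1, 5.0],
       [1.0, 2.6, 2.9]]
print(np.round(double_standardize(raw), 2))
```

A log or square-root transformation of `raw` before standardizing, as the text suggests for taming outliers, would be a one-line change (e.g., `np.log(raw)`).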

In conclusion, this study provided evidence for the construct validity of differential response latencies for assessing trait presence. Although the differential response latencies had modest reliabilities, they showed excellent convergent validity for various relevant schema indicators and excellent discriminant validity for various irrelevant schema indicators. In particular, the latencies for endorsing schema relevant items were negatively related to schema indicators, whereas the latencies for rejecting items were positively related. There was no evidence that the differential latencies were better described as a method factor independent of the schema indicators to which they were substantively related. Moreover, this pattern of relationships was consistent across four separate testing sessions that spanned 23 weeks. Taken together, the results clearly support the interpretation of individual differences in the differential latencies of responding to personality test items as construct valid indicators of trait presence.

The authors gratefully acknowledge the assistance of the Social Sciences and Humanities Research Council of Canada and the Ontario Ministry of Health. Correspondence concerning this article should be addressed to Dr. G.C. Fekken, Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada.


Anderson, J.R. (1990). Cognitive psychology and its implications (3rd ed.). New York: Freeman.

Davis, H. (1979). Self-reference and the encoding of personal information in depression. Cognitive Therapy and Research, 3, 97-110.

Dobson, K.S., & Shaw, B.F. (1987). Specificity and stability of self-referent encoding in clinical depression. Journal of Abnormal Psychology, 96, 34-40.

Dunn, T.G., Lushene, R.E., & O'Neil, H.F., Jr. (1972). Complete automation of the MMPI and a study of its response latencies. Journal of Consulting and Clinical Psychology, 39, 381-387.

Ebbesen, E.B., & Allen, R.B. (1979). Cognitive processes in implicit trait inferences. Journal of Personality and Social Psychology, 37, 471-488.

Erdle, S., & Lalonde, R.N. (1986, June). Processing information about the self: Evidence for personality traits as cognitive prototypes. Paper presented at the Canadian Psychological Association Annual Convention, Toronto, Canada.

Fazio, R.H. (1990). A practical guide to the use of response latency in social psychological research. In C. Hendrick & M.S. Clark (Eds.), Research methods in personality and social psychology (pp. 74-97). London: Sage.

Fekken, G.C., & Holden, R.R. (1989). Psychometric evaluation of the microcomputerized Personality Research Form. Educational and Psychological Measurement, 49, 875-882.

Fekken, G.C., & Holden, R.R. (1992). Response latency evidence for viewing personality traits as schema indicators. Journal of Research in Personality, 26, 103-120.

Fiske, S.T., & Taylor, S.E. (1984). Social cognition. Reading, MA: Addison-Wesley.

Follette, W.C. (1984). A computer administered MMPI and a study of response latency and social desirability. Unpublished doctoral dissertation, University of Washington.

Holden, R.R., & Fekken, G.C. (1987, August). Reaction time and self-report psychopathological assessment: Convergent and discriminant validity. Paper presented at the American Psychological Association Annual Convention, New York.

Holden, R.R., Fekken, G.C., & Cotton, D.H.G. (1991). Assessing psychopathology using structured test item response latencies. Psychological Assessment: A Journal of Consulting and Clinical Psychology, 3, 111-118.

Holden, R.R., & Hickman, D. (1987). Computerized versus standard administration of the Jenkins Activity Survey. Journal of Human Stress, 13, 175-179.

Jackson, D.N. (1975). Multimethod factor analysis: A reformulation. Multivariate Behavioral Research, 10, 259-275.

Jackson, D.N. (1984). Personality Research Form manual (3rd ed.). Port Huron, MI: Research Psychologists Press.

Krug, S.E., Keefe, M.T., & Ahadi, S.A. (1986). Factors influencing decision time in on-line personality assessment. In T.B. Gutkin & A. Elwork (Eds.), Computers in human behavior (pp. 1-14). New York: Pergamon.

Kuiper, N.A. (1981). Convergent evidence for the self as a prototype: The "inverted-U RT effect" for self and other judgments. Personality and Social Psychology Bulletin, 7, 439-443.

Lewicki, P. (1984). Self-schema and social information processing. Journal of Personality and Social Psychology, 47, 1177-1190.

Markus, H. (1977). Self-schemata and processing information about the self. Journal of Personality and Social Psychology, 35, 63-78.

Mervielde, I. (1988). Cognitive processes and computerized personality assessment. European Journal of Personality, 2, 97-111.

Mills, C.J. (1983). Sex-typing and self-schemata effects on memory and response latency. Journal of Personality and Social Psychology, 45, 163-172.

Mueller, J.H., Thompson, W.B., & Dugan, K. (1986). Trait distinctiveness and accessibility in the self-schema. Personality and Social Psychology Bulletin, 12, 81-89.

Paivio, A.U. (1975). Neomentalism. Canadian Journal of Psychology, 29, 263-291.

Popham, S.M., & Holden, R.R. (1990). Assessing MMPI constructs through the measurement of response latencies. Journal of Personality Assessment, 54, 469-478.

Rogers, T.B. (1974). An analysis of the stages underlying the process of responding to personality test items. Acta Psychologica, 38, 205-213.

Simola, S.K., & Holden, R.R. (1989, June). Validity of test item response latencies in assessing self-concept. Paper presented at the Canadian Psychological Association Annual Convention, Halifax, Canada.

Tetrick, L.E. (1989). An exploratory investigation of response latency in computerized administrations of the Marlowe-Crowne Social Desirability Scale. Personality and Individual Differences, 10, 1281-1287.


Reliabilities of the Composite Response Latency Measure

PRF Scale                Achievement  Affiliation  Harmavoidance  Nurturance  Order

Internal Consistency
  Time 1                    .40          .18           .58           .34       .29
  Time 2                    .26          .44           .42           .32       .48
  Time 3                    .25          .36           .50           .13       .41
  Time 4                    .24          .39           .43           .13       .30

Parallel Forms
  Time 1                    .28**        .07           .34**         .09       .14
  Time 2                    .17          .22*          .22*          .20*      .26**
  Time 3                    .28**       -.04           .31**         .13       .17
  Time 4                    .01          .18           .20           .03       .20*

Retest Stability(a)
  Time 1 - Time 2 (5)       .28**        .28**         .45**         .36       .28**
  Time 1 - Time 3 (17)      .49**        .18           .42**         .28**     .32**
  Time 1 - Time 4 (23)      .60**        .19*          .19*          .11       .43*
  Time 2 - Time 3 (12)      .42**        .18*          .45**         .23*      .46**
  Time 2 - Time 4 (18)      .36**        .18*          .35**         .18       .36**
  Time 3 - Time 4 (6)       .53**        .06           .49**         .50**     .38**

Notes: (a) The number of weeks between testing sessions is given in parentheses. The number of subjects responding at Times 1, 2, 3, and 4 is 100, 98, 94, and 92, respectively. However, the analyses are based on 69 to 100 subjects because the composite latency measure cannot be calculated for subjects who endorse or reject all items.

* p < .05; ** p < .01


Correlations between Trait Level Measures and Corresponding Differential Response Latencies

                                 Mean Endorsed PRF        Mean Rejected PRF
                                 Item Latency (E)         Item Latency (R)
Trait          Level Indicant    Time 1    2    3    4    Time 1    2    3    4

Achievement    PRF Scale Score    -.38  -.44  -.19  -.22    .32   .26   .26   .35
               Self-Rating        -.20  -.26  -.13  -.34    .23   .22   .19   .20
               Peer-Rating        -.19  -.25  -.12  -.17    .17  -.04   .03   .21

Affiliation    PRF Scale Score    -.39  -.60  -.55  -.40    .07   .22   .19   .12
               Self-Rating        -.47  -.53  -.43  -.41   -.01   .19   .02   .08
               Peer-Rating        -.31  -.45  -.31  -.21    .17   .15   .11   .03

Harmavoidance  PRF Scale Score    -.46  -.45  -.37  -.29    .63   .55   .58   .46
               Self-Rating        -.38  -.31  -.28  -.25    .51   .38   .39   .34
               Peer-Rating        -.34  -.35  -.10  -.20    .40   .39   .37   .16

Nurturance     PRF Scale Score    -.60  -.51  -.27  -.38    .35   .37   .27   .17
               Self-Rating        -.41  -.46  -.21  -.12    .36   .17   .08   .08
               Peer-Rating        -.38  -.32  -.39  -.38    .29   .22   .11   .21

Order          PRF Scale Score    -.42  -.51  -.42  -.40    .39   .42   .55   .18
               Self-Rating        -.43  -.42  -.35  -.34    .33   .37   .37   .17
               Peer-Rating        -.27  -.41  -.35  -.26    .29   .27   .40   .11

                                 E - R                    Irrelevant E - R(1)
Trait          Level Indicant    Time 1    2    3    4    Time 1    2    3    4

Achievement    PRF Scale Score    -.41  -.42  -.32  -.48    .02  -.15  -.06  -.14
               Self-Rating        -.27  -.41  -.22  -.33   -.03  -.04  -.03  -.01
               Peer-Rating        -.21  -.10  -.08  -.26    .02   .01  -.00  -.07

Affiliation    PRF Scale Score    -.19  -.39  -.36  -.23   -.02  -.05  -.10  -.01
               Self-Rating        -.15  -.34  -.16  -.21   -.00   .03  -.01  -.05
               Peer-Rating        -.15  -.27  -.20  -.09   -.10   .07  -.04   .03

Harmavoidance  PRF Scale Score    -.66  -.65  -.60  -.49   -.04   .00   .13   .08
               Self-Rating        -.54  -.45  -.42  -.38   -.02   .06   .18   .13
               Peer-Rating        -.45  -.48  -.29  -.24   -.10   .03   .02   .05

Nurturance     PRF Scale Score    -.48  -.51  -.33  -.29   -.10  -.25  -.11  -.06
               Self-Rating        -.43  -.33  -.14  -.11   -.10  -.14  -.08   .02
               Peer-Rating        -.36  -.31  -.22  -.33   -.12   .01   .02  -.10

Order          PRF Scale Score    -.55  -.58  -.62  -.40   -.16  -.06  -.08  -.01
               Self-Rating        -.51  -.49  -.45  -.34   -.10  -.10  -.04   .04
               Peer-Rating        -.34  -.43  -.47  -.25   -.11  -.07  -.11   .05

Notes: Correlations of .18 and .24 are significant at the .05 and .01 levels, respectively.

(1) The Irrelevant E - R index is the mean correlation of the composite latency with the specified trait level indicant for the four irrelevant traits.


Multimethod Factor Analysis of Five Traits by Four Methods Rotated to Target (Time 1)

                             I             II             III            IV           V
                        (Achievement) (Affiliation) (Harmavoidance) (Nurturance)  (Order)

Scale Score
  Achievement               .83          -.04           -.07            .04         .01
  Affiliation               .02           .87           -.02            .10        -.01
  Harmavoidance             .07          -.01            .92            .06        -.04
  Nurturance               -.04           .01            .01            .85        -.04
  Order                     .03           .00            .06            .05         .93

Self-Rating
  Achievement               .78           .08           -.02            .09        -.09
  Affiliation              -.06           .82           -.07           -.00         .01
  Harmavoidance             .04           .00            .82            .02         .05
  Nurturance               -.11           .19            .02            .67        -.03
  Order                     .03           .03            .01            .06         .91

Peer-Rating
  Achievement               .61          -.22            .12           -.07        -.05
  Affiliation              -.10           .49           -.20           -.00        -.14
  Harmavoidance            -.12          -.28            .64            .19         .01
  Nurturance                .18           .14            .08            .74         .09
  Order                     .05          -.06           -.14           -.07         .81

Composite Latency
  Achievement              -.63          -.06            .02           -.10        -.25
  Affiliation              -.02          -.42            .03            .00         .01
  Harmavoidance            -.03           .04           -.67            .25         .02
  Nurturance               -.13           .24            .09           -.49         .13
  Order                    -.02           .11           -.08            .13        -.51