The authors conducted an experiment to determine how the design of personality questionnaires influences applicant responses on personality scales. A completely crossed 2 × 2 × 2 design was carried out with real-world applicants and individuals in a job application training program, in which speed (with or without a time limit), response format (dichotomous or analogue), and instructions (neutral standard instruction or a repeated warning that people who fake can be detected) were manipulated. Two hundred eight participants completed the Myers-Briggs Type Indicator and a German Interpersonal Circumplex (IPC)-based questionnaire. Although the warning showed no influence, response format and the interaction between speed and response format showed significant effects on some scales.
Key words: personality questionnaire, faking good, social desirability, personnel selection, psychological assessment, response format, instruction, speed
Personality questionnaires are the best known and most popular tools used to measure personality. However, personality questionnaires are often highly transparent; that is, it is often evident to the test-taker which constructs the test measures. Because test-takers can infer what constructs the items measure, they may distort their responses in order to present themselves favourably. This is particularly problematic in the context of personnel selection, where applicants may "fake good" in an attempt to secure a job offer (cf. Kanning & Holling, 2001; Karner, 1999, 2002).
Considerable research has shown that even voluntary participants are able to intentionally fake good when instructed to put themselves in the position of a selection candidate (Kubinger, 1996, 2002) or to adapt their answers to a given job profile (Hoeth, Büttel, & Feyerabend, 1967; Lammers & Frankenfeld, 1999). Krahé and Hermann (2003) found similar results when analysing the susceptibility of the NEO Five-Factor Inventory (NEO-FFI) to systematic response tendencies. Because of these potential faking effects, data from self-descriptions should always be interpreted with caution (Deller & Kuehn, 2003).
Faking tendencies in real-world selection situations, however, are less pronounced than in simulated situations. Some studies show that adjusting personality scores based on social desirability scores does not decrease the validity of a test (Hough et al., 1990; Moorman & Podsakoff, 1992; Ones, Viswesvaran, & Reiss, 1996; Ones, Viswesvaran, & Schmidt, 1993), and some researchers even hold that personality questionnaires are valid methods for personnel selection despite their high transparency (Schmidt & Hunter, 1998; cf. also Marcus, 2003). However, the extent to which validity is decreased by the influence of social desirability bias is unknown (Kanning, 2003). Furthermore, because candidates who fake are more likely to be selected than those who answer honestly, faking may make selection systems unfair (Ellingson, Sackett, & Hough, 1999; Hough, 1998). Therefore, test users should take precautions to prevent or reduce applicant faking on personality questionnaires (Hough & Ones, 2002; McFarland, 2003).
Past research has explored whether it is possible to detect individuals who may be faking. Two means of detection have primarily been used: measuring and analysing response latencies (i.e., the time between item presentation and response; Esser & Schneider, 1998; Holden & Hibbs, 1995; Holden, Kroner, Fekken, & Popham, 1992; Hsu, Santelli, & Hsu, 1989; Kuntz, 1974; Robie et al., 2000; Schneider & Hübner, 1980) and embedding social desirability scales (also known as lie scales) within personality measures (Crowne & Marlowe, 1960; Edwards, 1957; Hoeth, Büttel, & Feyerabend, 1967; Paulhus, 1991; Schneider-Düker & Schneider, 1977). In the detection literature using response latencies, the general assumption is that response latencies indicate the fidelity of the response. …