achieves its highest validity when the subjects' responses are anonymous, when the criterion is self-reported, when the subjects are students, and when the respondent is aware that the investigator has another source of information on the subject's honesty. We argue that these moderators influence the validity of the Dishonesty scale by influencing the accuracy of the subject's responses. While the residual variance for the distribution of all studies is not large, partitioning the data into the hypothesized moderator subgroups reduces the size of the remaining variance.
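The variance-reduction logic above follows the Hunter-Schmidt "bare-bones" decomposition: subtract the variance expected from sampling error alone from the observed variance of the validities, and see how much shrinks further once studies are grouped by a moderator. The sketch below illustrates this with made-up correlations and sample sizes (all numbers and the subgroup split are illustrative assumptions, not values from the studies reviewed):

```python
def residual_variance(rs, ns):
    """Bare-bones meta-analysis step (Hunter-Schmidt style):
    sample-size-weighted mean r, observed variance of the rs,
    and the residual variance after removing expected sampling error."""
    total_n = sum(ns)
    r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    n_bar = total_n / len(ns)
    # Expected sampling-error variance of a correlation at mean N:
    var_e = (1 - r_bar ** 2) ** 2 / (n_bar - 1)
    return r_bar, var_obs, max(var_obs - var_e, 0.0)

# Hypothetical pooled distribution of validities (mixed conditions):
pooled = residual_variance([0.45, 0.20, 0.50, 0.15], [80, 120, 90, 110])

# Hypothetical moderator subgroup (e.g., anonymous self-report studies):
anon = residual_variance([0.45, 0.50], [80, 90])
```

With these illustrative numbers the pooled residual variance is positive, while within the homogeneous subgroup sampling error accounts for essentially all of the observed variance, which is the pattern the moderator argument predicts.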
The data refute the hypothesis that theft criteria are unreliable measures. If the criteria were highly unreliable, their correlations with the Dishonesty scale, or with any other variable, would cluster around zero. This was not the case.
This meta-analysis demonstrates that the Dishonesty scale studies conducted to date report sufficient information such that, when cumulated, informative conclusions may be drawn. This does not imply, however, that there is no room for improvement in the studies' reporting practices. Future studies should report the manner in which the sample members were selected, the intercorrelations among the predictors, the mean and variance of the predictor, the intercorrelations among the criteria, the mean and variance of the criterion, the criterion reliability, and the zero-order validities for the theft scale. Estimates of the population mean and variance of the Dishonesty scale should also be calculated.
A more focused research effort would improve knowledge in this area. Future studies should examine whether validity covaries with criterion type, occupational group, and testing conditions (e.g., anonymity and awareness of alternative measures of an employee's honesty). Most of the studies in this review are based on anonymous, self-report criteria, which differ substantially from criteria obtained in operational settings and may therefore overestimate operational validity. Thus, the generalization of these results to operational testing situations is in doubt. More research is needed using methods that mirror the conditions under which the test is operationally used.
Hirsch, H. R., & McDaniel, M. A. (1986). Developing decision rules for meta-analysis. In M. A. McDaniel (Chair), An overview and new directions in the Hunter, Schmidt, Jackson meta-analysis technique. Symposium presented at the First Annual Conference of the Society for I/O Psychology, Chicago.
Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.
Hunter, J. E., Schmidt, F. L., & Jackson, G. B. (1982). Meta-analysis: Cumulating research findings across studies. Beverly Hills, CA: Sage.
Jones, J. W., & Terris, W. (1982). Convicted felons' attitudes toward theft, violence, and illicit drug use. Paper presented at the Seventh Annual Convention of the Society of Police and Criminal Psychology, New Orleans.
Jones, J. W., & Terris, W. (1983). Human factors in organizations: 1. Screening nuclear