children? What kind of people?" Of the total sample, ninety percent answered "yes" to the first part. But only one interviewer predicted ninety percent yes answers, whereas eleven predicted seventy-five percent and five predicted fifty percent. To the second part, four percent of the respondents answered "whites." Of the seventeen interviewers, three predicted that this answer would be given in ten percent or less of the cases, seven predicted twenty-five percent, and seven predicted fifty percent or more.
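The arithmetic behind these figures can be checked directly. The following sketch tabulates the seventeen predictions for each part and compares their mean with the actual result; the open-ended categories "ten percent or less" and "fifty percent or more" are collapsed to the point values 10 and 50, which is an assumption of ours, not a datum of the study.

```python
# Illustrative recomputation of the prediction figures quoted above.
# Assumption: the categories "ten percent or less" and "fifty percent
# or more" are treated as the point values 10 and 50.

# Part 1: share answering "yes" -- actual result: 90 percent.
part1_predictions = [90] * 1 + [75] * 11 + [50] * 5   # 17 interviewers
mean1 = sum(part1_predictions) / len(part1_predictions)

# Part 2: share answering "whites" -- actual result: 4 percent.
part2_predictions = [10] * 3 + [25] * 7 + [50] * 7    # 17 interviewers
mean2 = sum(part2_predictions) / len(part2_predictions)

print(f"mean predicted 'yes' share:    {mean1:.1f}%  (actual 90%)")
print(f"mean predicted 'whites' share: {mean2:.1f}%  (actual  4%)")
```

Under this assumption the mean prediction for the first part falls well below the observed ninety percent, and the mean prediction for the second part well above the observed four percent, which is consistent with the statement below that the result surprised the interviewers themselves.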
This question was valuable in estimating people's reactions to birth control, and the near unanimity of the answers leads to important substantive conclusions. The credibility of this datum is increased by the fact that it surprised the interviewers themselves and was not the result of any preconceived notions.
We thus have reason to believe that the interviewers neither tried to influence the respondents to follow their own opinions nor distorted the answers according to their preconceived notions of the results. A general appraisal of interviewer effect on substantive questions is shown in Table 21, and it can be seen that this effect is not strong. However, some effect is attributable to the differences between interviewers on specific questions; these can be better assessed by techniques especially designed for the purpose. We turn now to an extended and more quantitative consideration of interviewer performance in our survey and of the personality traits which seem related to successful performance.
Interviewer Abilities and Interviewer Performance
In the study which we are describing, most of the data were collected by interviews. As has been pointed out, most of the data of the social sciences are collected in this way.24 With the particular technique employed, a structured survey, it is possible to identify each item of information according to its source (the informant), its recipient (the interviewer), its codification, and the manner of its collection. Since inaccuracies, errors, and omissions can be noted, methodological defects are more clearly apparent than in less formal interviewing methods. In other sections we have discussed the organization, selection, and training of the interviewers and the general approach which they used in their work. In this part we shall concentrate on the interviewing process itself, noting the distortion and loss of information which may occur. However, although we can measure errors, and spend some effort in doing so, we should not assume that these errors are particularly high or that they vitiate the substantive results.
For a general understanding of the interviewing process this study cannot give definitive results; it is mainly intended to show the way. With only seventeen interviewers and a large array of possible measures, analysis could easily become an assembly of case studies, since each specific occurrence can be explained by a unique combination of factors. Further, every effort was made, for practical reasons, to keep the variability between interviewers small, and it is therefore unlikely that we can obtain relationships high enough to prove any hypotheses definitively. On the other hand, it seems to us that the construction of various measures of interviewing is valuable in itself. This is true of interviewer traits, but even more so of the evaluation of interviewers' performance. In most studies, the criteria of the interviewers' work are gross measures, such as quit-rate or supervisors' ratings. These are insufficient either for understanding interviewing or for relating performance to other traits.
We shall evaluate the interviewer in two ways: in her general work within the interviewing team, which is similar to other work requirements; and in her role in the communication process with the respondent. For the latter purpose we shall develop measures from the ques-