Public Personnel Management

Assessing the Validity of Job Ratings: An Empirical Study of False Reporting in Task Inventories

Article excerpt

The use of task inventories or structured job analysis questionnaires in job analysis is widespread in both the public and private sectors.[1] Generally, when the task inventory method is used in job analysis, multiple job incumbents and/or supervisors rate a series of task statements on scales such as frequency, importance or difficulty. These ratings are often used to identify critical job tasks in test development and validation, as required by the job-relatedness provisions of the Uniform Guidelines on Employee Selection Procedures.[2] In addition to test development, such ratings are often used in job evaluation, in the design of performance appraisal systems or training programs, and more recently, with the passage of the Americans with Disabilities Act of 1990,[3] to identify the essential functions of a job.

Despite the widespread use of the task inventory method, Harvey[4] has noted that very little attention has been paid to assessing the validity of the obtained task ratings. One strategy for examining the validity of task ratings has been the use of "lie scales" to identify invalid or careless responding.[5] This approach is not unlike the use of "validity scales" in self-report personality inventories as checks on carelessness, distortion and the operation of response sets.[6]

Green and Stutzman[7] administered a task inventory to a sample of mental health workers. Respondents were asked to make relative-time-spent and importance ratings. Included in the inventory were a number of tasks known to be unrelated to the focal job. A "carelessness index" was calculated based on the number of bogus task statements that received responses other than 0 (no time spent, not important). Green and Stutzman found that 57% of the respondents indicated that they spent time performing bogus tasks and 72% indicated that these bogus tasks were at least somewhat important aspects of their job. More recently, Green and Veres[8] used a similar method in three different samples with a variety of response scales and found that the percentage of respondents endorsing bogus items ranged from 12.6% to 70.3%.
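The carelessness index itself is a simple count, and a minimal sketch of its computation may help fix the idea. In the Python sketch below, the function name, task statements, ratings, and 0-5 scale are hypothetical illustrations, not Green and Stutzman's materials:

    # Minimal sketch of a "carelessness index" in the spirit of Green and
    # Stutzman: count the bogus (non-job-related) task statements that a
    # respondent rated above 0 (0 = no time spent / not important).
    # All task names and ratings here are hypothetical illustrations.
    def carelessness_index(responses: dict[str, int], bogus_tasks: set[str]) -> int:
        """Number of bogus task statements given any nonzero rating."""
        return sum(1 for task in bogus_tasks if responses.get(task, 0) != 0)

    # Hypothetical respondent rating tasks on a 0-5 relative-time-spent scale.
    responses = {
        "schedule client appointments": 4,       # genuine task for the focal job
        "calibrate laboratory spectrometer": 2,  # bogus task, falsely endorsed
        "document client progress notes": 3,     # genuine task
        "audit payroll ledgers": 0,              # bogus task, correctly rated 0
    }
    bogus_tasks = {"calibrate laboratory spectrometer", "audit payroll ledgers"}

    print(carelessness_index(responses, bogus_tasks))  # prints 1

A respondent with an index of 0 rated every bogus item as not performed; any nonzero index flags possible careless or invalid responding.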

A related phenomenon has been observed in the marketing research literature: survey respondents make false claims about, or express opinions on, items about which they actually know nothing. This has been termed false reporting[9] or uninformed response error.[10] For example, Goldsmith[11] found that 40% of a survey sample of grocery shoppers indicated awareness of bogus brand names.

Because important decisions are often based on job analysis data, the evidence that substantial percentages of task inventory respondents may provide highly inconsistent and invalid ratings is unsettling. To date, no research has addressed what variables may affect levels of false reporting (endorsing a bogus item) in task inventories. Identifying the factors that influence false reporting would aid practitioners and have major implications for conducting job analyses and interpreting their results. The present study focuses on the effects of two questionnaire characteristics on false reporting: the type of response scale used and the method of task inventory administration.

One variable that could influence false reporting in task ratings is the type of rating scale used. Task inventories often obtain some measure of how frequently each task is performed. According to Harvey,[12] the most popular is probably the relative-time-spent (RTS) scale. The RTS scale asks raters to indicate the amount of time spent on each task relative to other tasks, using relative time anchors such as: No Time, Much Less Time, Less Time, About The Same Time, More Time, and Much More Time. A much less popular frequency rating scale is the absolute-time-spent (ATS) scale. The ATS scale also asks raters to indicate the frequency of task performance, but uses absolute time anchors, such as: Does not perform, Every six months to yearly, Monthly, Every few days to weekly, Every day, and More than once every day. …
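For concreteness, the two scale formats might be coded as follows. This is a hypothetical sketch: the 0-5 numeric codes are an illustrative convention, not a published standard, though the anchor wordings follow the text above:

    # Hypothetical numeric codings for the two frequency scale formats
    # described above. The 0-5 codes are an illustrative convention only.
    RTS_ANCHORS = {  # relative-time-spent: time on this task vs. other tasks
        0: "No Time",
        1: "Much Less Time",
        2: "Less Time",
        3: "About The Same Time",
        4: "More Time",
        5: "Much More Time",
    }

    ATS_ANCHORS = {  # absolute-time-spent: how often the task is performed
        0: "Does not perform",
        1: "Every six months to yearly",
        2: "Monthly",
        3: "Every few days to weekly",
        4: "Every day",
        5: "More than once every day",
    }

The key contrast is that an RTS rating is only interpretable relative to the respondent's other tasks, whereas an ATS rating maps to a concrete calendar frequency.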
