Two Urinary Catecholamine Measurement Indices for Applied Stress Research: Effects of Time and Temperature until Freezing


INTRODUCTION

There is increasing evidence that stress at work can have detrimental effects on performance, well-being, and health (Cooper, 1998; Ganster & Perrewe, 2001; Kahn & Byosiere, 1992; Marmot & Wilkinson, 1999; Sonnentag & Frese, 2005). However, the vast majority of studies in occupational stress research have used self-reports to measure both the independent variables (stressors) and the dependent variables (e.g., strain; see Kahn & Byosiere, 1992; Sonnentag & Frese, 2005; Zapf, Dormann, & Frese, 1996). Thus, stressor-strain relationships may be overestimated because of correlated measurement error (common method variance; see Semmer, Zapf, & Greif, 1996).

Many authors therefore recommend measuring independent and dependent variables with different methods (multimethod approach; see Kahn & Byosiere, 1992). Physiological measures are good candidates for such an approach. One cannot regard them as "the" more objective measures, given that they also suffer from typical errors such as artifact susceptibility and measurement error attributable to devices, detection range of analytical procedures, handling of instruments, occasional influences, and so forth (see Beehr, 1995; Fried & Ferris, 1987). Because these errors are not correlated with errors of self-report, relationships may be underestimated, as Semmer et al. (1996) have shown for job observation methods (which are another candidate for alternatives to self-report). Nevertheless, if handled carefully, physiological measures do offer the potential to avoid common method variance, to yield better estimates of relationships, and to increase the understanding of the processes involved.
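
The direction of these two biases can be made explicit with the standard classical-test-theory decomposition; the following sketch uses our own notation and is not taken from the studies cited above. Writing the observed stressor and strain scores as $X = \tau_X + \varepsilon_X$ and $Y = \tau_Y + \varepsilon_Y$, with errors uncorrelated with the true scores, the observed correlation is

\[
r_{XY} = \frac{\operatorname{Cov}(\tau_X, \tau_Y) + \operatorname{Cov}(\varepsilon_X, \varepsilon_Y)}{\sqrt{\left(\sigma^2_{\tau_X} + \sigma^2_{\varepsilon_X}\right)\left(\sigma^2_{\tau_Y} + \sigma^2_{\varepsilon_Y}\right)}}.
\]

When stressor and strain are both assessed by self-report, the error covariance term is typically positive and can push $r_{XY}$ above the true correlation $r_{\tau_X \tau_Y}$ (overestimation). When the errors are uncorrelated, as with an independent physiological measure, that term vanishes and only the enlarged denominator remains, so that $r_{XY} = r_{\tau_X \tau_Y} \sqrt{r_{XX} r_{YY}}$ is attenuated, where $r_{XX}$ and $r_{YY}$ denote the reliabilities of the two measures (Spearman's correction for attenuation).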

However, as soon as one leaves the laboratory and moves into the field, a number of pitfalls exist that might lead to serious errors, thus hampering the interpretability of field study results. One of these potential sources of error concerns the necessity for storage and transportation (Lundberg, Melin, Fredrikson, Tuomisto, & Frankenhaeuser, 1990). Clinical chemistry shows that fixed correction factors cannot be assigned for such interferences in urine analysis. It is therefore recommended that samples be acidified immediately and frozen as soon as possible (Shoup, Kissinger, & Goldstein, 1984). What does seem feasible, however, is to keep the samples at refrigerator temperature for a while before freezing them. Thus, Boomsma, Alberts, van Eijk, Man in 't Veld, and Schalekamp (1993) showed that catecholamines (CAs) are stable at 4 °C in unpreserved urine for 1 month, a finding that was recently replicated for a 10-hr storage period by Miki and Sudo (1998). Miki and Sudo also stored samples at room temperature; here they found (a) a strong decrease in CA values in unpreserved samples and (b) a tendency toward increasing, rather than decreasing, values in acidified samples. These increases were, however, within a range of 10% for a delay of 1 day.

From the study by Miki and Sudo (1998), it seems clear that (a) immediate freezing is essential for unpreserved samples and (b) delays of more than a day at room temperature are risky for both acidified and unpreserved specimens. More refined studies are needed, however, to clarify whether results are robust with regard to delays until freezing of up to 24 hr. This is a crucial time period for field studies: Freezing within 24 hr usually can be guaranteed under field conditions. Delays of several hours, however, are often difficult to avoid in field work in which urine samples are picked up at the participants' homes or workplaces in different areas (Elfering, Grebner, Semmer, & Gerber, 2002; Grebner, 2001).

Because this time lag of up to 24 hr before freezing is so crucial, we tested the role of delay as a potential source of error and compared it with the measurement error that arises from the accuracy of the laboratory analysis itself. …
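
One simple way to frame this comparison (an illustrative formulation in our notation, not a formula taken from the article) is to express the delay effect for each sample as a relative deviation from its immediately frozen reference aliquot,

\[
\Delta_i = 100 \cdot \frac{c_i^{\mathrm{delayed}} - c_i^{\mathrm{immediate}}}{c_i^{\mathrm{immediate}}} \; [\%],
\]

and to judge the mean and spread of these deviations against the analytical coefficient of variation of the assay: delay-related changes that remain within the assay's coefficient of variation would be indistinguishable from ordinary laboratory measurement error.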