Findings that decision makers can come to different conclusions depending on the order in which they receive information have been termed the "information order bias." Trained, experienced individuals exhibit similar behaviors; however, it has been argued that this result is not a bias but rather a pattern-matching process. This study provides a critical examination of that claim. It also assesses both experts' susceptibility to an outcome framing bias and the effects of varying task loads on judgment. Using a simulation of state-of-the-art ship defensive systems operated by experienced, active-duty U.S. Navy officers, we found no evidence of a framing bias, whereas task load had a minor but systematic effect. The order in which information was received had a significant impact, and the effect was consistent with a judgment bias. Nonetheless, we note that pattern-matching processes, similar to those that produce inferential and reconstructive effects on memory, could also explain our results. Actual or potential applications of this research include decision support system interfaces and training programs that might be developed to reduce judgment bias.
For nearly 30 years, the study of how human judgment deviates from normative theories of decision making (i.e., the study of "human judgment bias") was one of the most heavily researched and widely cited areas in psychology (e.g., Hogarth, 1987; Kahneman, Slovic, & Tversky, 1982; Nisbett & Ross, 1980). This research indicates that humans handle complex situations by simplifying and by applying a limited set of heuristics, or rules of thumb, that only approximate the kinds of information and processes specified by normative theories. Although these approximations were often adequate and greatly increased people's capacity to process large quantities of ambiguous information, their use sometimes led to significant and systematic deviations from the prescriptions of normative theories.
Recently, however, a number of decision-making researchers, particularly those working in more applied areas such as training and decision support, have voiced concern about the generality of this research. Specifically, they claim that demonstrations of human judgment bias have involved conditions that are too sterile and too contrived to generalize to real-world situations (Cannon-Bowers, Salas, & Pruitt, 1996). Rather, they argue, if the results are to be applied with any confidence, decision making must be studied under more realistic conditions, that is, naturalistic decision making (NDM). Interest in NDM has increased, as indicated by the publication of two books (Klein, Orasanu, Calderwood, & Zsambok, 1993; Zsambok & Klein, 1997) and a special issue of Human Factors, "Decision Making in Complex Environments" (Salas & Cannon-Bowers, 1996), devoted to this topic.
Some studies in more naturalistic contexts have yielded results similar to those from classical decision-making research. Ashton and Ashton (1988, 1990), for example, found that professional auditors were more influenced by negative than by positive information and were affected by how they received information, whether all at once or one piece at a time. Similarly, Adelman, Tolcott, and Bresnick (1993) found that information presented all at once had a different effect on U.S. Army air defense operators' judgments than did the same information presented over a period of time. When conflicting information was presented over time, that study found that the operators were more heavily affected by evidence introduced later (i.e., a recency effect). Entin and his colleagues (Entin, 1992; Entin, Serfaty, & Forester, 1989) also found a recency effect in the judgments of military intelligence analysts when evidence was presented over time and was conflicting.
On the other hand, when evidence was ambiguous but essentially neutral, Tolcott, Marvin, and Lehner (1989) found that Army intelligence analysts showed a primacy effect. …