Supporting Situation Assessment through Attention Guidance and Diagnostic Aiding: The Benefits and Costs of Display Enhancement on Judgment Skill
William J. Horrey, Christopher D. Wickens, Richard Strauss, Alex Kirlik, and Thomas R. Stewart
Many operational environments are characterized by large amounts of dynamic and uncertain information presented to performers on technological interfaces. To perform accurately and consistently in such environments, people must manage, integrate, and interpret this information appropriately to formulate an accurate assessment of the current situation. In the battlefield environment, for example, effective commanders must perceive and integrate a wide range of tactical, organizational, and environmental information to guide planning (Graham & Matthews, 1999), drawing on an array of potentially fallible information from a number of different sources (e.g., Wickens, Pringle, & Merlo, 1999). Regardless of the context, the extent to which performers can successfully integrate these sources of information into a coherent assessment will directly affect their situation awareness as well as their subsequent decisions and actions.
In this chapter, we present a study of human performers' ability to integrate multiple sources of displayed, uncertain information in a laboratory simulation of threat assessment in a battlefield environment. Two different types of automated aids were used to enhance the situation display: the first guided visual attention to relevant cues, and the second recommended an actual judgment. We assessed performance in terms of skill score (Murphy, 1988), as well as its decomposition using Stewart's (1990) refinement of that measure, based on Brunswik's lens model (see Goldstein, this volume; Cooksey, 1996). Results indicated that the introduction of display enhancement in this task had both benefits and costs to performance. Modeling helped us provide plausible interpretations of these (in some cases counterintuitive) effects.
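To fix notation for what follows (this is the standard form of these measures, not the chapter's specific estimates), let f denote a judgment and x the criterion. Murphy's (1988) skill score compares the mean squared error of the judgments against a baseline that always forecasts the criterion mean, and decomposes as

\[
SS \;=\; 1 - \frac{\mathrm{MSE}(f,x)}{s_x^2}
\;=\; r_{fx}^2 \;-\; \Bigl(r_{fx} - \frac{s_f}{s_x}\Bigr)^{2} \;-\; \Bigl(\frac{\bar{f}-\bar{x}}{s_x}\Bigr)^{2}.
\]

The first term is squared achievement (the judgment–criterion correlation); the second, conditional (regression) bias, penalizes a mismatch between the variance ratio s_f/s_x and r_{fx}; the third, unconditional bias, penalizes a shifted judgment mean. Stewart's (1990) refinement further decomposes the correlation term through the lens model equation,

\[
r_{fx} \;=\; G\,R_s\,R_e \;+\; C\,\sqrt{1-R_s^2}\,\sqrt{1-R_e^2},
\]

where R_e indexes environmental predictability, R_s judgment consistency, G the match between the modeled components of judge and environment, and C the correlation of their residuals. As a purely illustrative calculation (values ours, not the study's): with r_{fx} = .7 and no unconditional bias, compressing s_f/s_x from .7 to .4 reduces skill from .49 to .49 − (.7 − .4)² = .40, one route by which the variance narrowing discussed below can cost skill.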
Attention guidance automation, for example, actually appeared to decrease performance by narrowing the variance (or range) of aided performers' judgments relative to that of unaided judgments. We found, however, that this result is consistent with the design of the automation in this task, to the extent that participants may have used the attention guidance cue not for guidance but as a judgment cue in itself (the attention guidance cue, too, had lower variance than the task criterion). In contrast, performance did improve when automation instead recommended an actual judgment. Using an automation failure or “catch” trial designed to detect automation overreliance, we found that this automation benefit was due almost