Measuring the Fit between Human Judgments and Alerting Systems: A Study of Collision Detection
Amy R. Pritchett and Ann M. Bisantz
Methodologies for assessing human judgment in complex domains are important both for the design of displays that inform judgment and for the design of automated systems that suggest judgments. This chapter describes a use of the n-system lens model to evaluate the impact of displays on human judgment and to explicitly assess the similarity between human judgments and a set of potential judgment algorithms for use in automated systems. Specifically, the n-system model was used to examine a previously conducted study of aircraft collision detection that had originally been analyzed using standard analysis of variance (ANOVA) methods. Our analysis found the same main effects as the earlier analysis. However, the lens model analysis provided greater insight into the information relied on for judgments and into the impact of displays on judgment. Additionally, the analysis identified attributes of human judgments that were—and were not—similar to judgments produced by automated alerting systems.
A common paradigm in human–automation interaction requires a human to make a judgment in parallel with an automated system and then to consider the automated output, accepting or rejecting it as appropriate. Such judgments are normally made in concert with displays relaying important aspects of environmental conditions. Designing such automated systems and displays requires understanding the attributes of judgment that they are to support and the impact of their design on human judgment.
Methodologies for assessing and capturing judgments in complex, uncertain domains are therefore important for several reasons. First, these methodologies can allow us to understand the information sought by a human judge about the environment and how this information is used to form judgments. Second, this understanding can be used to drive the design of displays that inform judgment. Third, understanding the degree to which human judgments are similar to the processes and outputs of automated decision support systems is necessary for the design of displays and training such that human operators understand, trust, and appropriately rely on such automated systems.
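To make the kind of analysis described above concrete, the following is a minimal sketch of how the core indices of a single-system (classical Brunswikian) lens model might be computed from data: the environment's criterion and a judge's responses are each regressed on the same cue set, yielding environmental predictability (R_e), judge consistency (R_s), linear knowledge (G), and achievement (r_a). The function name, data layout, and use of ordinary least squares are illustrative assumptions, not the specific procedure used in the study reported here.

```python
import numpy as np

def lens_model_stats(X, y_env, y_judge):
    """Illustrative single-system lens model indices.

    X       : (n_trials, n_cues) array of cue values
    y_env   : (n_trials,) criterion values from the environment
    y_judge : (n_trials,) the human's judgments on the same trials
    """
    # Add an intercept column and fit OLS models of the environment
    # and of the judge over the identical cue set.
    X1 = np.column_stack([np.ones(len(X)), X])
    b_env, *_ = np.linalg.lstsq(X1, y_env, rcond=None)
    b_jud, *_ = np.linalg.lstsq(X1, y_judge, rcond=None)
    pred_env = X1 @ b_env    # environment model's predictions
    pred_jud = X1 @ b_jud    # judge model's predictions

    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    return {
        "R_e": r(y_env, pred_env),     # environmental predictability
        "R_s": r(y_judge, pred_jud),   # judge consistency
        "G":   r(pred_env, pred_jud),  # knowledge (model agreement)
        "r_a": r(y_env, y_judge),      # achievement
    }
```

In an n-system extension such as the one used in this chapter, additional "systems" (e.g., candidate alerting algorithms) are modeled over the same cues, so the same correlational machinery can quantify how similar human judgments are to each algorithm's outputs.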
This research details a methodology for measuring human judgment in general and the fit between human judgment and the output of alerting algorithms (a type of automated decision aid) in particular. Building on judgment analysis methods,