Academic journal article: Human Factors

Measuring the Fit between Human Judgments and Automated Alerting Algorithms: A Study of Collision Detection

Article excerpt

INTRODUCTION

Motivation

A common paradigm in human-automation interaction requires a human to make a judgment, in parallel with an automated system, and then to consider, accept, or reject the automated output as appropriate. Such judgments are normally made in concert with displays relaying important aspects of environmental conditions. Designing such automated systems and displays requires an understanding of the attributes of judgment that they are to support and of the impact of their design on human judgment.

Methodologies for assessing and capturing judgments in complex, uncertain domains are therefore important for several reasons. First, these methodologies allow one to understand what information a human judge seeks about the environment and how that information is used to form judgments. Second, this understanding can be used to drive the design of displays that inform judgment. Third, an understanding of human judgment can shape the design of displays and training so that they support judgment in a manner consonant with the underlying alerting algorithms driving the automated systems that make or suggest judgments.

This paper details a methodology for measuring human judgment in general and the fit between human judgment and the output of alerting algorithms in particular. Building on judgment analysis methods, we demonstrate the utility of n-system lens modeling for directly measuring the correspondence between human judgments and those produced by automated systems. Specifically, the method quantifies the degree of agreement between human and automated judgments, thereby predicting situations in which the human may disagree with, and potentially not rely on, an automated system. To demonstrate this methodology, a study previously analyzed using nomothetic methods is examined using the n-system model.
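To make this concrete, the sketch below shows one way such a comparison can be set up. It is a minimal illustration only, not the implementation used in the study: the cue set, weights, and data are hypothetical, and linear-additive judgment policies are assumed, as is standard in judgment analysis.

import numpy as np

def fit_linear_policy(cues, judgments):
    # Ordinary least squares regression of judgments on cues;
    # returns the fitted weights and the model's predictions.
    X = np.column_stack([np.ones(len(cues)), cues])  # add intercept column
    weights, *_ = np.linalg.lstsq(X, judgments, rcond=None)
    return weights, X @ weights

rng = np.random.default_rng(0)

# Hypothetical cues for a parallel-approach conflict judgment:
# lateral separation, closure rate, deviation angle (standardized).
cues = rng.normal(size=(200, 3))

# Hypothetical criterion and judges, each a linear function of the cues.
criterion = cues @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.5, size=200)
human = cues @ np.array([0.5, 0.4, 0.0]) + rng.normal(scale=0.7, size=200)
automation = cues @ np.array([0.7, 0.2, 0.1])  # deterministic alerting algorithm

# Model each judge's policy from the same cue set.
_, pred_human = fit_linear_policy(cues, human)
_, pred_auto = fit_linear_policy(cues, automation)

# Raw agreement between the two judges' outputs ...
agreement = np.corrcoef(human, automation)[0, 1]
# ... and similarity of their modeled (linear) policies.
policy_similarity = np.corrcoef(pred_human, pred_auto)[0, 1]
# Achievement of each judge against the environmental criterion.
ach_human = np.corrcoef(human, criterion)[0, 1]
ach_auto = np.corrcoef(automation, criterion)[0, 1]

print(f"agreement r = {agreement:.2f}, policy similarity G = {policy_similarity:.2f}")
print(f"achievement: human {ach_human:.2f}, automation {ach_auto:.2f}")

In this framing, low policy similarity flags the situations of interest here: cases that the two judges weight differently and on which the human may therefore override or distrust the automation.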

Required Capabilities in a Measurement Methodology

Studies of human judgment must consider both human behavior and the ecology in which the judgments are made. Take, for example, the case of a pilot judging whether an aircraft on a parallel approach is deviating from its approach path to a collision course with the pilot's own aircraft. As shown schematically in the top panel of Figure 1, the important entities in such a judgment process are the environmental criterion (in this example, whether a conflict is actually developing), the human's judgments (in this example, whether the pilot judges a conflict to be developing), and the information available to the human for making the judgment.

[FIGURE 1 OMITTED]

This view of decision making has its basis in the lens model of Brunswik (Brunswik, 1955; Cooksey, 1996; Hammond, 1996), in which humans make judgments of a distal (unknown or hidden) criterion based on proximal (known) cues that are probabilistically related to the criterion. The extent to which a judgment corresponds to the environmental criterion reflects the degree of success of the decision maker. Using the lens model suggested by the top panel of Figure 1, it can be seen that the relationship between judgment and criterion is mediated by the information available to the judge about the environment (shown here as "cues"). The human's judgment policy is defined by which cues are considered and how those cues are combined to result in a judgment. An equivalent definition can be created for the relationship between the cues and the environmental criterion. A methodology for assessing judgment, then, should be able to capture these elements of judgment strategy.
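Although this excerpt does not reproduce it, these relationships are conventionally summarized by the lens model equation (Tucker, 1964), which decomposes achievement, the correlation $r_a$ between judgment and criterion, as

$$ r_a = G\,R_s\,R_e + C\,\sqrt{1 - R_s^{2}}\,\sqrt{1 - R_e^{2}} $$

where $R_s$ is the multiple correlation of the judgments on the cues (the judge's consistency), $R_e$ is the multiple correlation of the criterion on the cues (environmental predictability), $G$ is the correlation between the predictions of the two regression models (linear knowledge), and $C$ is the correlation between their residuals (unmodeled knowledge).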

Some studies of human judgment focus primarily on the relationship among criterion, cues, and judgment; these studies can provide insight into expert behavior, inform display design, and assess system performance. An additional requirement for a methodology is the ability to compare observed human behavior with the judgments that may be formed in parallel by other judges, such as automated systems, shown schematically in the bottom panel of Figure 1. …
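By the same logic, and still assuming linear-additive policies, the agreement $r_{HA}$ between a human judge (H) and an automated system (A) judging the same cases can be decomposed by direct analogy with the single-system equation above (cf. Cooksey, 1996, on multi-system designs):

$$ r_{HA} = G_{HA}\,R_H\,R_A + C_{HA}\,\sqrt{1 - R_H^{2}}\,\sqrt{1 - R_A^{2}} $$

where $R_H$ and $R_A$ are the two judges' policy consistencies, $G_{HA}$ is the correlation between their modeled policies, and $C_{HA}$ is the correlation of their residuals. A low $G_{HA}$ identifies precisely the situations noted above, in which the human may disagree with, and potentially not rely on, the automation.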
