System Operator Response to Warnings of Danger
There are many instances in which system operators respond slowly or not at all to warnings of danger, and some instances in which the operators disable the warning devices. A primary reason is that the warnings have "cried wolf" too often to be credible (e.g., Sorkin, 1988). Moreover, the penalties of leaving the operational task to respond to false alarms may be considerable; thus, for example, the Federal Aviation Administration not long ago ordered a shutdown of collision-warning devices on commercial airliners because of the serious distraction they represented to both pilots and air traffic controllers.
In this chapter, I treat quantitative aspects of crying wolf in terms of the positive predictive value (PPV) of a warning; that is, the probability that a warning will truly indicate some specified dangerous condition. I consider first the theory and quantification of the PPV and then present a laboratory experiment in which participants were exposed to different values of PPV. The participants performed a continuous, manual tracking task at a computer workstation, during which randomly occurring warnings required them to leave the tracking task to make a specified response to the warning. With a bonus scheme, premiums were placed on carefully setting an automated tracker before leaving the tracking task and on responding quickly to the warning. Over five conditions, the PPV of the warning was set variously at 0.25, 0.50, 0.61, and 0.75.
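The quantitative relation between PPV and the detector's characteristics follows from Bayes' rule. The sketch below illustrates it in Python; the detector parameters (hit rate, false-alarm rate, base rate) are illustrative assumptions, not values from the experiment reported here.

```python
def ppv(hit_rate, false_alarm_rate, base_rate):
    """P(danger | warning): the probability that a warning is a true alarm.

    Computed by Bayes' rule from the detector's hit rate
    P(warning | danger), its false-alarm rate P(warning | no danger),
    and the prior probability (base rate) of the dangerous condition.
    """
    true_alarms = hit_rate * base_rate
    false_alarms = false_alarm_rate * (1.0 - base_rate)
    return true_alarms / (true_alarms + false_alarms)

# Even a very sensitive detector (95% hit rate, 5% false-alarm rate)
# has a low PPV when the dangerous condition is rare:
print(round(ppv(0.95, 0.05, 0.01), 3))   # base rate 1%
print(round(ppv(0.95, 0.05, 0.30), 3))   # base rate 30%
```

With a base rate of 1%, fewer than one warning in five is a true alarm, whereas at a 30% base rate the same detector yields a PPV near 0.9; this is the arithmetic behind the crying-wolf problem.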
A practical concern is that values of PPV sufficient to ensure a reliable response are difficult to attain. The reason is that even very sensitive detectors will have a low PPV because the prior probability (base rate) of a dangerous condition is usually very low. The problem is exacerbated by the tendency of system engineers to set detector thresholds for issuing a warning leniently enough to achieve a very low probability that the detector will miss a dangerous