Signal detection theory, as developed in electrical engineering and based on statistical decision theory, was first applied to human sensory discrimination 40 years ago. The theoretical intent was to provide a valid model of the discrimination process, and the methodological intent was to provide reliable measures of discrimination acuity in specific sensory tasks.

The first studies in the psychology laboratory demonstrated that decision factors are fundamentally involved in even the simplest discrimination tasks. In a detection task, the observer decides how likely the presence of a signal must be before he or she will report that a signal is present rather than just noise. In a recognition task, in which a signal is known to be present, the observer decides how likely the presence of Signal A must be relative to Signal B in order to report A. In other words, the observer sets a decision criterion, or a response threshold, along a probabilistic decision variable. The immediate import of this finding was to undermine the venerable concept of a physiologically determined sensory threshold. The decision criterion is set intelligently, in accordance with the observer's perception of the prior probabilities of the two possible stimuli and of the various benefits and costs of correct and incorrect responses.

The first studies also showed that an analytical method of detection theory, called the relative operating characteristic (ROC), can isolate the effect of the placement of the decision criterion, which may be variable and idiosyncratic, so that a pure measure of intrinsic discrimination acuity is obtained. The model and methods were then used in other areas of psychology in which discrimination is central, including recognition memory, conceptual judgment, and animal learning.
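The separation of decision criterion from intrinsic acuity can be illustrated with a small numerical sketch. Under the standard equal-variance Gaussian model of detection theory, the sensitivity index d' is computed from the hit and false-alarm rates as the difference of their normal-deviate (z) transforms, and it stays constant while the criterion shifts. The sketch below (the function names and the particular d' and criterion values are illustrative assumptions, not from the text) shows a single observer with fixed acuity producing very different hit and false-alarm rates under a lax versus a strict criterion, while the recovered d' is unchanged:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal; equal-variance Gaussian model

def rates(d_prime, c):
    """Hit and false-alarm rates for sensitivity d' and criterion c.

    Signal and noise distributions are unit-variance Gaussians centered
    at +d'/2 and -d'/2; the observer responds "signal" above c.
    """
    hit = N.cdf(d_prime / 2 - c)
    false_alarm = N.cdf(-d_prime / 2 - c)
    return hit, false_alarm

def d_prime_from(hit, false_alarm):
    """Recover d' = z(hit) - z(false alarm), independent of criterion."""
    return N.inv_cdf(hit) - N.inv_cdf(false_alarm)

# Same observer (d' = 1.5) under a lax (c = -0.5) and a strict (c = +0.8)
# criterion: the response rates differ, the recovered d' does not.
for c in (-0.5, 0.8):
    h, f = rates(1.5, c)
    print(f"c = {c:+.1f}  hit = {h:.3f}  fa = {f:.3f}  "
          f"d' = {d_prime_from(h, f):.3f}")  # d' prints 1.500 both times
```

Sweeping the criterion c while holding d' fixed traces out one ROC curve in the (false-alarm, hit) plane, which is why the ROC can isolate criterion placement from discrimination acuity.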
For the past 20 years, ROC analysis has also been used to measure the discrimination acuity or inherent accuracy of a broad range of practical diagnostic