Components of Sensitivity
What determines the degree to which two stimuli can be distinguished? Detection theory offers a two-part answer: Sensitivity is high if the difference between the average neural effects of the two stimuli is large or if the variability arising from repeated presentations is small. Common measures of accuracy like d′ are accordingly expressed as a mean difference divided by a standard deviation. In most of the applications we have considered, changes in sensitivity are equally well interpreted as changes in mean difference or in variability, and attributing such effects to one source or the other is both impossible and unnecessary. In the early chapters of this book, we therefore suppressed the role of distribution variances, dealing only with mean differences and standard deviation ratios.
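The ratio form of d′ can be made concrete with a small sketch. The function and the particular numbers below are illustrative assumptions, not taken from the text; they show that halving the mean difference and doubling the standard deviation reduce sensitivity by exactly the same amount, which is why the two sources cannot be told apart from d′ alone.

```python
def d_prime(mu_signal, mu_noise, sigma):
    """d' as a mean difference divided by a common standard deviation."""
    return (mu_signal - mu_noise) / sigma

# Baseline: signal mean 1.0, noise mean 0.0, common sd 1.0
base = d_prime(1.0, 0.0, 1.0)           # d' = 1.0

# Halving the mean difference...
smaller_means = d_prime(0.5, 0.0, 1.0)  # d' = 0.5
# ...or doubling the variability...
larger_sd = d_prime(1.0, 0.0, 2.0)      # d' = 0.5
# ...lowers d' identically, so the locus of the change is invisible.
```

The identical outcomes of the last two calls are the computational face of the point in the text: with only two stimuli, a drop in d′ does not say whether means moved closer or distributions widened.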
When the experimental situation is expanded beyond two stimuli, the locus of a sensitivity effect may become clear. If three stimuli differ along a single dimension (light flashes varying only in luminance, for example) and the extreme stimuli are more discriminable than the adjacent ones, systematic increases in mean effect provide the simplest interpretation. If the perceptibility of a stimulus decreases when another must also be detected, as in uncertain detection designs, it is natural to imagine that variance rather than mean difference has been affected by the demands of attention. Our treatments of these problems in chapters 5 and 8 adopted exactly these interpretations.
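The three-stimulus case can be sketched under the standard equal-variance Gaussian model, in which d′ between any pair is the difference of their means divided by the common standard deviation, so sensitivities along one dimension add. The stimulus labels and mean values below are hypothetical, chosen only to illustrate the additivity.

```python
# Hypothetical mean effects for three luminances on one dimension,
# with a common standard deviation (equal-variance Gaussian model).
means = {"dim": 0.0, "medium": 1.2, "bright": 2.0}
sigma = 1.0

def pair_d_prime(mu_a, mu_b, sigma=sigma):
    """d' for one stimulus pair under the equal-variance model."""
    return abs(mu_a - mu_b) / sigma

d_adjacent_1 = pair_d_prime(means["dim"], means["medium"])
d_adjacent_2 = pair_d_prime(means["medium"], means["bright"])
d_extreme = pair_d_prime(means["dim"], means["bright"])
# d'(dim, bright) equals d'(dim, medium) + d'(medium, bright):
# the extreme pair is exactly as much more discriminable as the
# mean-shift interpretation predicts.
```

When the observed d′ for the extreme pair matches the sum of the adjacent d′ values, systematic change in means is the simplest account, as the text notes; departures from additivity would instead point at variance changes.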
In the pure two-stimulus world, disentangling these two contributions to sensitivity requires another approach. A starting point is to ask whether there is variability within a stimulus class itself, and perusal of our several examples reveals the answer to be: sometimes. Absolute auditory detection typifies one case: Every presentation of a weak tone burst is the same, so all the variability must arise from processing. The variance is entirely internal.
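One conventional way to formalize the internal/external split (an assumption of independent noise sources, not a claim from the text) is to let the variances add, so that the effective standard deviation is the root of the summed variances. The sketch below shows how introducing stimulus (external) variability on top of fixed processing (internal) noise lowers d′.

```python
import math

def d_prime_total(delta_mu, sigma_internal, sigma_external):
    """d' when independent internal and external variance sources add."""
    total_sd = math.sqrt(sigma_internal**2 + sigma_external**2)
    return delta_mu / total_sd

# A perfectly repeatable weak tone burst: all variance is internal.
d_internal_only = d_prime_total(1.0, 1.0, 0.0)   # d' = 1.0

# A stimulus class that itself varies from trial to trial:
d_with_external = d_prime_total(1.0, 1.0, 1.0)   # d' = 1/sqrt(2)
```

In the absolute-detection case described above, the external term is zero and d′ reflects internal noise alone; when the stimulus class itself varies, both terms contribute and the two sources must be disentangled by other means.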