Statistics and Detection Theory
Statistics is commonly divided into two parts. In descriptive statistics, a data set is reduced to a useful measure—a statistic—such as the sample mean or observed proportion. Detection theory includes many possible statistics of sensitivity [d′, α, p(c), etc.] and of bias, and this book has been well stocked with (descriptive) statistics.
Inferential statistics, on the other hand, provides strategies for generalizing beyond the data. In chapter 2, for example, we met an observer who correctly recognized 69 of 100 Old faces while producing only 31% false alarms, and thus boasted a d′ of 1.0. As a measure of sensitivity for these 200 trials, this value cannot be gainsaid, but how much faith can we have in it as a predictor of future performance? If the same observer were tested again with another set of faces, might d′ be only 0.6 or even 0.0?
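The d′ of 1.0 quoted for this observer follows from the equal-variance SDT formula d′ = z(H) − z(F), where z is the inverse of the standard normal distribution function. A minimal sketch in Python (the function name `dprime` is illustrative, not from the text):

```python
from statistics import NormalDist

def dprime(hit_rate: float, false_alarm_rate: float) -> float:
    """Equal-variance SDT sensitivity: d' = z(H) - z(F)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# The observer above: H = 69/100 hits, F = 0.31 false alarms
print(round(dprime(0.69, 0.31), 2))  # -> 0.99, i.e., about 1.0
```

Rerunning the same computation with H = 0.60 and F = 0.40, say, would give a noticeably smaller d′, which is exactly the sampling-variability question raised above.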
The statistician views statistics, such as sensitivity measures, as estimates of true or population parameters. In this chapter, we consider how statistics can be used to draw conclusions about parameters. The two primary issues are: (a) How good an estimate have we made? What values, for example, might true d′ plausibly have? and (b) Can we be confident that the parameter values, whatever they are, differ from particular values of interest (like 0) or from each other? These two problems are called estimation and hypothesis testing.
The chapter is in four sections. First, we consider the least processed statistics, hit and false-alarm rates. Second, we examine sensitivity and bias measures. The third section treats an important side issue—the effects of averaging data across stimuli, experimental sessions, or observers. For all these topics, the primary model considered is equal-variance SDT, and the
discussion of hypothesis testing is limited to hypotheses about one parameter or the difference between two parameters. The final section shows how