When--as in the examples of the last chapter--all the cell means in a large table differ by several standard errors, there is overwhelming evidence of some kind of relationship among the variables. But when cells are few, or the means not well separated, it is often unclear whether the variables are really correlated or not. A significance test is a method of evaluating the evidence in these cases. The test to be used in a given situation depends on the kind of relationship to be tested, its form, the kind of variables used, the amount and kind of data, and sometimes the convenience of the researcher. But the basic rationale underlying all tests is the same.
Essentially, a significance test consists of calculating the risk that an observed correlation might be a purely accidental result, generated by the random behavior of uncorrelated variables. Any indication of correlation is based on observed sample evidence, but chance variation can produce the same kind of "evidence," even when the variables are uncorrelated. The heart of all significance tests is thus the calculation of the probability of observing the same evidence in a hypothetical case of no correlation. The lower this risk, the greater is the significance of the evidence, and it is natural to refer to the calculated risk as the significance level of the result.
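The rationale just described can be made concrete with a small sketch. The permutation test below (the function name and data are illustrative, not from the text) estimates exactly the risk in question: how often would random shuffling of the observations, which by construction destroys any real correlation between group and value, produce a difference in means at least as large as the one observed?

```python
import random

def permutation_test(sample_a, sample_b, n_resamples=10_000, seed=0):
    """Estimate the significance level of an observed difference in means:
    the probability that purely random assignment of the pooled values to
    the two groups would yield a difference at least as large."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(sample_a) - mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)           # random relabeling: no real correlation
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:          # chance produced evidence as strong
            extreme += 1
    return extreme / n_resamples      # the calculated risk, i.e. significance level

# Hypothetical data: two small samples with well-separated means
p = permutation_test([5.1, 5.4, 4.9, 5.2], [6.0, 6.3, 5.8, 6.1])
```

A small value of `p` means chance alone rarely reproduces the observed separation, so the evidence of a real difference is significant in the sense defined above.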
The study of the difference between two sample means provides a basic introduction to significance tests. Table 6.1 contains information on the