Just two numbers are necessary to analyze data from most experiments: the mean and the standard deviation. These two numbers address the basic difficulty that a sample mean is only an uncertain estimate of the true mean of the population from which the sample is drawn.
Ideally, therefore, we desire an interval around the sample mean that is likely to contain the population mean. The best possible interval is one that contains the population mean with a specified, known confidence. This interval tells us the likely error of the sample mean.
The miracle of statistics is that it can give us this best possible interval. For example, an interval extending one standard deviation of the mean (the standard error) on either side of the sample mean will contain the population mean with about 68% confidence in most applications. If you desire an interval that gives greater confidence, statistics can provide it: extending the interval to about two standard errors yields roughly 95% confidence. These confidence intervals provide a base for significance tests, which serve as evidence that an experimental treatment has had a real effect on behavior.
Two easy formulas are enough to get confidence intervals. With these two formulas, you can do much of your data analysis, sometimes all.
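To make this concrete, here is a minimal sketch in Python of the two formulas at work: the sample mean and standard deviation, followed by the standard error and approximate confidence intervals. The data values are hypothetical, and the critical value 1.96 assumes the normal approximation; with small samples, a t critical value would be more accurate.

```python
import math

def mean_and_sd(xs):
    """Sample mean and standard deviation (n - 1 in the denominator)."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return m, sd

# Hypothetical scores from a small experiment.
scores = [12, 15, 11, 14, 13, 16, 12, 14]

m, sd = mean_and_sd(scores)
se = sd / math.sqrt(len(scores))  # standard deviation of the mean

# About 68% confidence: one standard error either side of the mean.
ci68 = (m - se, m + se)
# About 95% confidence: 1.96 standard errors (normal approximation).
ci95 = (m - 1.96 * se, m + 1.96 * se)

print(f"mean = {m:.2f}, sd = {sd:.2f}, se = {se:.2f}")
print(f"~68% CI: ({ci68[0]:.2f}, {ci68[1]:.2f})")
print(f"~95% CI: ({ci95[0]:.2f}, {ci95[1]:.2f})")
```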
The confidence interval has an empirical lesson: Reduce the variability of your data. This variability determines the standard deviation, which in turn determines the width of your confidence interval, that is, the likely error of your sample mean. The lower the variability in your data, the lower your likely error.
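A short sketch with hypothetical numbers shows why reducing variability pays off: the likely error shrinks in direct proportion to the standard deviation, but only with the square root of the sample size.

```python
import math

def ci_half_width(sd, n, z=1.96):
    """Half-width of an approximate 95% confidence interval."""
    return z * sd / math.sqrt(n)

# Halving the standard deviation halves the likely error...
print(ci_half_width(sd=10, n=25))   # 3.92
print(ci_half_width(sd=5, n=25))    # 1.96
# ...whereas achieving the same halving through sample size
# alone requires four times as many observations.
print(ci_half_width(sd=10, n=100))  # 1.96
```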
Reducing variability is mainly accomplished through experimental procedures. In addition, statistical techniques can provide inestimable aid. These two approaches, empirical and statistical, are stressed throughout this book. They embody the Empirical Direction in Design and Analysis.