Multiple determination is a central problem in every science. The effect of any one variable may depend on context and situation, that is, on what other variables are operative. Many experiments seek to isolate and study single variables. To understand perception, thought, and action, however, requires analysis of how multiple variables act in concert.
Factorial design provides a systematic approach to multiple determination. Although factorial design may seem prosaic, it turns out to have important advantages, illustrated with the P-Q comparison of Figure 5.1. Factorial design, accordingly, has become a mainstay of experimental analysis.
A conceptual discontinuity is involved in the shift from a single variable, studied in previous chapters, to two (or more) variables, taken up in this and following chapters. This discontinuity stems from the necessity to make some assumption about the joint, integrated action of the two variables.
Analysis of variance bridges this discontinuity in a crude but often useful way. Its main assumption is that the two variables add; any difference between this additivity assumption and the actual data is handled by the brutal, Procrustean method of calling the difference a residual and including it in the Anova model. There is thus an ad hoc residual for every combination of the two variables, that is, for every cell of the factorial design. Sometimes these residuals, also called interactions, are meaningful, but more often they are not. This troublesome issue is broached in the discussion of Figure 5.4 but is mainly deferred to Chapter 7.
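The additivity assumption and its cell-by-cell residuals can be made concrete with a small numerical sketch. The cell means below are hypothetical values chosen for illustration; the calculation shows how the additive model predicts each cell from the grand mean plus row and column effects, and how the leftover difference in each cell becomes the residual (interaction) term.

```python
import numpy as np

# Hypothetical cell means for a 2 x 2 factorial design:
# rows are levels of factor A, columns are levels of factor B.
cell_means = np.array([[4.0, 6.0],
                       [5.0, 9.0]])

grand = cell_means.mean()                    # overall mean
row_eff = cell_means.mean(axis=1) - grand    # main effect of A
col_eff = cell_means.mean(axis=0) - grand    # main effect of B

# Additive prediction: grand mean + row effect + column effect.
additive = grand + row_eff[:, None] + col_eff[None, :]

# Whatever the additive model fails to capture is assigned,
# cell by cell, to the residual (interaction) term.
residual = cell_means - additive
print(residual)
```

By construction the residuals sum to zero across every row and every column, which is why they carry only the nonadditive part of the data.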
Virtually all the material of the previous chapters on one-way design transfers directly to factorial design. This includes formulas for SSs, MSs, and Fs, confidence intervals, and power. Novel, however, is the concept of the factorial graph, which facilitates visual inspection as illustrated in Figure 5.3.
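The visual inspection afforded by a factorial graph rests on a simple geometric fact: plotting each row of cell means as a curve over the column levels, additivity implies that the curves are parallel. A minimal sketch of this parallelism check, using hypothetical cell means, follows.

```python
import numpy as np

# Hypothetical cell means: rows are levels of factor A,
# columns are levels of factor B (illustrative values only).
cell_means = np.array([[4.0, 6.0, 8.0],
                       [5.0, 7.0, 9.0]])

# In a factorial graph, each row is one curve plotted over the
# column levels. Under additivity, the difference between any two
# curves is constant across columns, so the curves are parallel.
diff = cell_means[1] - cell_means[0]
parallel = bool(np.allclose(diff, diff[0]))
print(parallel)
```

Nonparallelism in the factorial graph thus signals nonzero residuals, that is, interaction, in the Anova model.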