Statistics as Principled Argument

In this illuminating volume, Robert P. Abelson delves into the too-often dismissed problems of interpreting quantitative data and then presenting them in the context of a coherent story about one's research. Unlike many books on statistics, this is a remarkably engaging read, filled with fascinating real-life (and real-research) examples rather than with recipes for analysis. It will be of true interest and lasting value to beginning graduate students and seasoned researchers alike.

The central thesis of the book is that the purpose of statistics is to organize a useful argument from quantitative evidence, using a form of principled rhetoric. Five criteria, described by the acronym MAGIC (magnitude, articulation, generality, interestingness, and credibility), are proposed as crucial features of a persuasive, principled argument.

Particular statistical methods are discussed, with minimal use of formulas and heavy data sets. The ideas throughout the book revolve around elementary probability theory, t tests, and simple issues of research design. It is therefore assumed that the reader has already had some exposure to elementary statistics. Many examples are included to illustrate the connection between statistics and substantive claims about real phenomena.


This book arises from 35 years of teaching a first-year graduate statistics course in the Yale Psychology Department. When I think back over this time span I am struck both by what has changed in teaching statistics, and by what has remained the same.

The most obvious changes are effects of the computer revolution. In the 1950s a Computing Laboratory was a room containing a large collection of mechanical calculators--ungainly Monroes and Marchants with push buttons for entering data, and an interior assembly of ratchets for carrying out arithmetic operations. From the clickety-clack, you could usually tell at a considerable distance how many students were working on their data late at night. I confess to occasional nostalgia for this period of earnest, pitiful drudgery, but then I come to my senses and realize that in the old days, statistical analysis was not merely noisy; it also took forever, and was filled with errors. Nowadays, of course, computing facilities and statistical packages for both mainframes and PCs have vastly enlarged the possibilities for fast, complex, error-free analysis. This is especially consequential for large data sets, iterative calculations, multifactor or multivariate techniques, and--as heralded by the founding of the Journal of Computational and Graphical Statistics in 1992--for the use of computers in connection with graphical procedures for exploring data (see also Cleveland, 1993; Schmid, 1983; Wainer & Thissen, 1993). I do not mean to suggest that computers eliminate stupidity--they may in fact encourage it. But well-conceived analyses can be done with extraordinarily greater speed and in much greater detail than was possible a few short decades ago.

Other noteworthy developments in the last 20 years are: Exploratory Data Analysis (Hoaglin, Mosteller, & Tukey, 1983, 1985, 1991; Tukey, 1977), which sharply shifts emphasis away from statistical significance tests toward freewheeling search for coherent patterns in data; log-linear models (Goodman, 1970; Wickens, 1989) to analyze the frequencies . . .
