Exceptional Children

Improving Research Clarity and Usefulness with Effect Size Indices as Supplements to Statistical Significance Tests

In a recent issue of Exceptional Children (EC), Carnine (1997) encouraged special education researchers to provide information that is trustworthy, usable, and accessible to both scholars and practitioners. Carnine noted that the National Academy of Sciences evaluated educational research generically and found "methodologically weak research, trivial studies, an infatuation with jargon, and a tendency toward fads with a consequent fragmentation of effort" (Atkinson & Jackson, 1992, p. 20). Others also have argued that "too much of what we see in print is seriously flawed" as regards research methods, and that "much of the work in print ought not to be there" (Tuckman, 1990, p. 22). Gall, Borg, and Gall (1996) concurred, noting that "the quality of published studies in education and related disciplines is, unfortunately, not high" (p. 151).

Indeed, empirical studies of published research involving methodology experts as judges corroborate these holistic impressions. For example, Hall, Ward, and Comer (1988) and Ward, Hall, and Schramm (1975) found that over 40% and over 60%, respectively, of published research was judged by methods experts to be seriously or completely flawed. Wandt (1967) and Vockell and Asher (1974) reported similar results from their empirical studies of the quality of published research.

Fortunately, Carnine (1997) noted that special education research appears to be stronger than other education research as regards criteria such as trustworthiness, usability, and accessibility. But there clearly is room for improvement. Thus, in commenting on Carnine's important article, Sydoriak and Fields (1997) urged that "research findings must be reported in language that is familiar to practitioners" (p. 530). In another comment in the same issue of EC, Morrissey (1997) suggested that proposed IDEA funding probably will and should include requirements to "present professional knowledge bases in a clear and meaningful manner to affected persons at all levels of the service systems" (p. 532).

PURPOSES

One area for potential improvement in providing research results that are trustworthy, usable, and accessible involves the use of effect sizes to supplement the interpretation of statistical significance tests. The present article examines actual reporting practices within quantitative research reports published in EC in the context both of recent criticisms of misuses of statistical significance tests (cf. Cohen, 1994; Kirk, 1996; Schmidt, 1996; Thompson, 1996, 1998b) and of the publication guidelines of the American Psychological Association (APA, 1994). As the author guidelines for EC note, "manuscripts sent to Exceptional Children are submitted to a review process only if [among other things] ... format conforms to the standards in the Publication Manual of the American Psychological Association (4th edition, 1994)" (The Council for Exceptional Children, 1997, p. 139).

The present study had two purposes: first, to characterize recent reporting practices in the journal in relation to emerging changes in reporting expectations; and second, to provide further enlightenment for the field regarding both the limitations of statistical tests and the benefits of supplementing these tests with reports of effect sizes.

FRAMEWORK FOR EVALUATING REPORTING PRACTICES IN EC

The recent fourth edition of the American Psychological Association style manual (APA, 1994) emphasized that p values are not acceptable indices of result effect size:

   Neither of the two types of probability values [statistical significance
   tests] reflects the importance or magnitude of an effect because both
   depend on sample size ... You are [therefore] encouraged to provide
   effect-size information. (APA, 1994, p. 18, emphasis added)
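
A simple illustration, using hypothetical numbers rather than any data from the studies reviewed here, shows why a p value cannot serve as an index of effect magnitude. For two independent groups each of size n, the t statistic is related to Cohen's standardized mean difference d by

   t = d \sqrt{n / 2}

Thus the same effect, d = 0.50, yields t of roughly 1.12 (p of roughly .28) when n = 10 per group, but t of roughly 3.54 (p < .001) when n = 100 per group. The p value shifts with sample size even though the magnitude of the effect is unchanged, which is why the manual encourages authors to report effect-size information in addition to significance tests.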

Effect size reporting and interpretation are all the more important given the limitations of statistical significance tests. …
