Academic journal article: School Psychology Review

Effect Sizes in Single Case Research: How Large Is Large?

Article excerpt

Abstract. This study examined the problem of interpreting effect sizes in single case research. Nine single case analytic techniques were applied to a convenience sample of 77 published interrupted time series (AB) datasets, and the results were compared by technique across the datasets. Reanalysis of the published data helped answer questions about the nine analytic techniques: their effect sizes, autocorrelation, statistical power, and intercorrelations. The study's findings were that few effect sizes matched Cohen's (1988) guidelines, and that effect sizes varied greatly by analytic technique. Four techniques showed adequate power for typical published data series. Autocorrelation was a sizeable problem in most analyses. In general, individual techniques performed so differently that users need technique-specific information to guide both selection of an analytic technique and interpretation of its results.
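To make the abstract's central quantity concrete, the sketch below (Python, with invented data) computes one widely used single-case effect size, a baseline-standardized mean difference between phases, together with a lag-1 autocorrelation estimate. It is offered only as an illustration of the kind of metric under study; it is not a reproduction of any of the nine techniques compared in the article, and the function names and data are hypothetical.

```python
import numpy as np

def standardized_mean_difference(baseline, treatment):
    """One common single-case effect size: the treatment-phase mean minus
    the baseline-phase mean, divided by the baseline-phase standard deviation."""
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    return (treatment.mean() - baseline.mean()) / baseline.std(ddof=1)

def lag1_autocorrelation(series):
    """Lag-1 autocorrelation of a data series -- the nuisance the abstract
    refers to when it notes that autocorrelation was a sizeable problem."""
    x = np.asarray(series, dtype=float) - np.mean(series)
    return float(np.sum(x[1:] * x[:-1]) / np.sum(x * x))

# Hypothetical AB (baseline/treatment) series, invented for illustration only.
baseline_phase = [3, 4, 3, 5, 4, 3]
treatment_phase = [6, 7, 8, 7, 9, 8]
print(standardized_mean_difference(baseline_phase, treatment_phase))
print(lag1_autocorrelation(baseline_phase + treatment_phase))
```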

**********

The debate on the usefulness of statistical analysis with single case research data has largely been resolved over the past decade. Though it is acknowledged that no present statistical technique can adequately reflect the range of criteria available to visual analysis (Baer, 1977; Michael, 1974; Parsonson & Baer, 1992), statistical analysis is now regarded by most experts as a useful supplementary technique in many circumstances. Even strong proponents of visual analysis (Huitema, 1986; Kazdin, 1982) acknowledge that statistical results can be valuable or even essential when there is no stable baseline, when unambiguous results must be shared with other professionals, and when effects of a new treatment cannot be predicted. Interestingly, the first two of these three conditions are common. Although stable, flat baselines are desirable, they are often not found--even in published data. Of 77 published graphs composing the convenience sample for this study, nearly 66% had noticeable positive or negative baseline trend, and over 50% of the baselines possessed high variability.
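As a rough sketch of how a baseline might be screened for these problems, the illustration below uses a least-squares slope as a trend index and a coefficient of variation as a variability index. The data and any cutoffs are hypothetical; this is not the procedure used to classify the 77 graphs in the study's sample.

```python
import numpy as np

def screen_baseline(baseline):
    """Rough baseline screen: least-squares slope as a trend index and
    coefficient of variation as a variability index. Cutoffs for calling a
    trend 'noticeable' or variability 'high' are left as a judgment call."""
    y = np.asarray(baseline, dtype=float)
    t = np.arange(len(y))
    slope, _intercept = np.polyfit(t, y, 1)   # linear trend per session
    cv = y.std(ddof=1) / y.mean() if y.mean() else float("nan")
    return slope, cv

# Hypothetical baseline with an upward drift, invented for illustration.
slope, cv = screen_baseline([2, 3, 3, 5, 6, 7])
print(f"slope per session: {slope:.2f}, coefficient of variation: {cv:.2f}")
```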

The second condition arguing for statistical analyses, the need to document and share data unambiguously, has become commonplace with shrinking external funding and increased accountability for the use of those funds. Funding agents increasingly require evaluation of client interventions to establish treatment efficacy through objective, quantifiable data. Although conclusions from visual analysis are convincing to individual clinicians, converging evidence from multiple studies indicates that visual judgments of graphed data are notoriously unreliable. Finally, the application of meta-analysis to single case research has brought into focus the need for valid, objective measures of treatment effects that can be communicated beyond the walls of a particular clinical context and compared with results from other environments.

The acknowledged benefit of statistical analysis as a supplementary technique in many or most circumstances has not, however, translated into broad use in published research. In this study's sample of 124 articles from counseling, clinical, and school psychology journals over the past 15 years, over 65% used only visual analysis. Nonstatistical comparisons of means, medians, or proportions made up most of the remaining 35%. Effect sizes, confidence intervals, or tests of statistical significance appeared in only 11% of the 124 articles. This figure is comparable to the 10% prevalence of statistical analyses found in earlier, larger surveys conducted 10 and 25 years ago (Busk & Marascuilo, 1992; Kratochwill & Brody, 1978).

The underuse of statistical analysis of single case research data is the context for the present study. This underuse is understandable: researchers or clinicians wishing to supplement visual with statistical analyses have a number of techniques available but little information on how any of them performs. The number of analytic techniques available for short data series has easily tripled since the early 1980s (Barlow & Hersen, 1984; Kazdin, 1982), yet promising techniques such as the regression models of Center et al. …
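As a sketch of the regression-model family the excerpt alludes to, the illustration below fits an ordinary least-squares model with a baseline trend term, a phase indicator for change in level, and a phase-by-time interaction for change in slope. This is a generic piecewise AB regression written for this example with invented data, not a reproduction of the Center et al. model.

```python
import numpy as np

def ab_phase_regression(baseline, treatment):
    """Illustrative AB regression: y = b0 + b1*time + b2*phase + b3*(phase*time_in_phase).
    b2 estimates the change in level at intervention onset and b3 the change in
    slope. A generic sketch of the regression-model family, not the Center et al.
    model mentioned in the text."""
    y = np.concatenate([np.asarray(baseline, float), np.asarray(treatment, float)])
    n_a, n_b = len(baseline), len(treatment)
    time = np.arange(n_a + n_b, dtype=float)
    phase = np.concatenate([np.zeros(n_a), np.ones(n_b)])
    time_in_phase = np.concatenate([np.zeros(n_a), np.arange(n_b, dtype=float)])
    X = np.column_stack([np.ones_like(time), time, phase, phase * time_in_phase])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["intercept", "trend", "level_change", "slope_change"], coef))

# Hypothetical AB series, invented for illustration only.
print(ab_phase_regression([3, 4, 3, 5, 4], [6, 7, 8, 9, 9, 10]))
```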
