Academic journal article Education & Treatment of Children

A Preliminary Examination to Identify the Presence of Quality Indicators in Single-Subject Research

Article excerpt

Abstract

Scholars in the field of special education put forth a series of papers that proposed quality indicators for specific research designs that must be present for a study to be considered of high quality, as well as standards for evaluating a body of research to determine whether a practice is evidence-based. The purpose of this article was to pilot test the quality indicators proposed for single-subject research studies in order to identify points that may need clarification or revision. To do this, we examined the extent to which the proposed quality indicators were present in two single-subject studies, both examining the effects of teacher praise on specific behaviors of school-age children. Our application of the quality indicators showed that neither study met the minimal acceptable criteria for single-subject research. We discuss the use of the quality indicators in relation to their clarity and applicability and suggest points for deliberation as the field moves forward in establishing evidence-based practices.

**********

Advocating that educators base practice on research--in other words, that evidence-based practices be the primary means of instruction utilized in classrooms--first requires that specific practices have been identified as evidence-based. Although not all educators would agree (e.g., Gallagher, 2006), we assert that scientific research is the most reliable means for determining an educational practice to be effective or evidence-based (e.g., Kauffman & Sasso, 2006; Landrum & Tankersley, 2004). But just how should research findings be synthesized to determine the effectiveness of a practice?

By yielding an overall effect size across the studies examined, meta-analyses (Glass, 1976; Kavale, 2001) have become popular for synthesizing research findings, and their results have advanced our understanding of effective educational practices. However, no established approach currently exists for identifying the quality of studies that are synthesized (Cooper & Hedges, 1994), which may allow a poorly designed and executed study to influence the overall effect size--thereby potentially misidentifying an ineffective practice as effective, or vice versa. Moreover, no firm guidelines clearly establish the minimum number of studies needed to produce reliable meta-analytic results (Cooper & Hedges). Adding to this, no agreed-upon process exists for determining effect sizes (the metric required to conduct a meta-analysis) for single-subject research, although several methods have been proposed and debated (e.g., R² as discussed by Allison & Gorman, 1993; percentage of non-overlapping data as discussed by Scruggs & Mastropieri, 2001).
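One of the proposed single-subject effect-size metrics mentioned above, percentage of non-overlapping data (PND; Scruggs & Mastropieri, 2001), is straightforward to compute: it is the share of treatment-phase data points that fall beyond the most extreme baseline point in the therapeutic direction. A minimal sketch follows; the function name and the illustrative data values are our own and do not come from the studies discussed here.

```python
def pnd(baseline, treatment, increase_expected=True):
    """Percentage of non-overlapping data (PND).

    Counts the treatment-phase points that exceed the highest
    baseline point (or fall below the lowest, when a decrease in
    behavior is the therapeutic direction), expressed as a
    percentage of all treatment-phase points.
    """
    if increase_expected:
        threshold = max(baseline)
        non_overlap = sum(1 for x in treatment if x > threshold)
    else:
        threshold = min(baseline)
        non_overlap = sum(1 for x in treatment if x < threshold)
    return 100.0 * non_overlap / len(treatment)


# Hypothetical example: a praise intervention intended to increase
# a student's percentage of intervals on task.
baseline = [20, 25, 22, 30]
treatment = [35, 40, 28, 45, 50]
print(pnd(baseline, treatment))  # 4 of 5 points exceed 30 -> 80.0
```

Note that PND, like the other proposed metrics, remains debated: a single extreme baseline point can depress the score regardless of how strong the treatment effect otherwise appears.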

To address such matters surrounding the identification of evidence-based practices, researchers in other fields have developed and implemented frameworks for determining effective practices. For example, the Division 12 Task Force of the American Psychological Association (Chambless et al., 1998), the National Association of School Psychologists (Kratochwill & Stoiber, 2002), and the What Works Clearinghouse (WWC, established in 2002 by the U.S. Department of Education; http://www.whatworks.ed.gov/) have established guidelines for evidence-based practices in clinical psychology, school psychology, and general education, respectively. Although utilizing an existing framework, such as that developed for general education, for determining evidence-based practices for students with disabilities would be efficient, the WWC did not originally consider single-subject research in determining whether a practice is evidence-based. The WWC has recently added single-case designs as a special type of quasi-experimental design that, in the absence of severe design or implementation problems, can be categorized as meeting evidence standards with reservations (the same level at which randomized control trials with severe design or implementation problems are categorized). …
