Academic journal article: Exceptional Children

Council for Exceptional Children: Standards for Evidence-Based Practices in Special Education

Article excerpt

This statement presents an approach for categorizing the evidence base of practices in special education. The quality indicators and the criteria for categorizing the evidence base of special education practices are intended for use by groups or individuals with advanced training and experience in educational research design and methods.

These quality indicators and criteria only apply to studies examining the effect of an operationally defined practice or program on student outcomes. For example, programs or practices that improve instructor or parent behaviors, even if those behaviors have been shown to improve student outcomes, do not fall within the purview of this approach. Moreover, reviews of practices should be specific to an outcome area and learner population. That is, reviews should set clear parameters on a targeted outcome (e.g., reading comprehension) and a targeted learner population (e.g., children with learning disabilities, preschoolers with developmental delays, adolescents with autism, K-3 struggling readers, K-12 students with disabilities). Reviews might also be specific to a setting (e.g., public schools, inclusive classes) or type of interventionist (e.g., paraprofessionals).

Studies need not be published in a peer-reviewed journal to be included in a review using these standards. However, studies must be publicly accessible.

The work of Gersten et al. (2005) and Horner et al. (2005) guided the development of these standards, which may be viewed as a refinement of their foundational and exceptional scholarship. In developing the standards, the Council for Exceptional Children's (CEC) Evidence-Based Practice Workgroup also drew on a number of other sources for categorizing the evidence base of practices (e.g., What Works Clearinghouse) and incorporated feedback from 23 anonymous special education researchers who kindly participated in a Delphi study. The CEC is indebted to Gersten et al., Horner et al., and the Delphi study participants, without whom this work would not have been possible.

RESEARCH DESIGNS

CEC's approach to categorizing the evidence base of practices in special education considers two research methods: group comparison research (e.g., randomized experiments, nonrandomized quasi-experiments, regression discontinuity designs) and single-subject research. The rationale is that causality can be reasonably inferred from these designs when they are well designed and conducted.

In experimental group comparison designs, participants are divided into two or more groups to test the effects of a specific treatment manipulated by the researcher. The standards consider group comparison studies in which researchers assign participants to treatment and comparison groups randomly (in randomized controlled trials) or nonrandomly (e.g., group quasi-experimental designs, including regression discontinuity designs). Single-subject experimental designs use participants (individuals or groups) as their own control and collect repeated measures of dependent variables over time to test the effects of a practice manipulated by the researcher. The standards consider single-subject designs that systematically address common threats to validity and reasonably demonstrate experimental control. For example, appropriately designed and conducted ABAB/reversal, multiple-baseline, changing-criterion, and alternating-treatment designs are acceptable; AB (i.e., baseline-intervention) designs are not considered.

Although CEC recognizes the important role that correlational, qualitative, and other descriptive research designs play in informing the field of special education, the standards do not consider research using these designs because identifying evidence-based practices involves making causal determinations, and causality cannot be reasonably inferred from these designs.

QUALITY INDICATORS

The intent of identifying quality indicators essential for methodologically sound, trustworthy intervention studies in special education is not to prescribe all the desirable elements of an ideal study but to enable special education researchers to determine which studies have the minimal methodological features to merit confidence in their findings. …
