Academic journal article Exceptional Children

Pasteur's Quadrant as the Bridge Linking Rigor with Relevance

Article excerpt

Evidence-based practices (EBPs), or instructional practices validated by scientifically based research, have been promoted as key components of educational reform (Cook, Smith, & Tankersley, 2011; Earles-Vollrath, 2012). To ensure that EBPs really do cause desired changes in student outcomes, they are derived from research studies that demonstrate high internal validity, such as methodologically rigorous randomized controlled trials. Such studies may be difficult to conduct in typical classroom settings (Berliner, 2004) and often involve atypical supports, such as external funding and highly trained interventionists. Accordingly, EBPs frequently represent efficacious practices shown to work under ideal conditions, rather than effective practices that work in typical conditions. Although efficacy studies may advance the science of education, they alone may have limited immediate utility for practitioners (Dijkers, 2011).

In contrast to EBPs, practice-based evidence (PBE) focuses on mining evidence from the typical experiences of practitioners (Barkham, Hardy, & Mellor-Clark, 2010; Simons, Kushner, Jones, & James, 2003). Whereas EBPs emphasize the internal validity and rigor of supporting research--what Hildreth and Kimble (2002) called explicit "know-what" research--PBE emphasizes external validity and relevance of information, or tacit "know-how" research. PBE can be derived from research studies that investigate the effectiveness of practices in various real-life settings and explore how practitioners use and adapt practices in different contexts. Quasi-experimental, single-case, qualitative, and case-study research designs might all be used to formally investigate PBE (Dijkers, 2011). Less formally, practitioners create their own PBE by conducting action research and informally collecting data on student performance. This research, however, has less support in the community of education scientists.

The No Child Left Behind Act of 2001 (NCLB, 2006) and the Individuals With Disabilities Education Improvement Act (IDEA, 2006) reflect the trend of basing educational practice on sound scientific evidence such as EBPs (Cook & Schirmer, 2006; Cook, Tankersley, & Landrum, 2009). This seems appropriate because scientific research is generally recognized as the best tool for discerning practices that cause improved student outcomes (Earles-Vollrath, 2012; Odom et al., 2005). However, despite considerable resources devoted to conducting and synthesizing experimental research to identify what works, EBP research appears to have had limited impact on practice (American Enterprise Institute for Public Policy Research, 2007; Cook & Smith, 2012; Epstein, 2010; Glasgow, Lichtenstein, & Marcus, 2003; Shriver, 2007; Sindelar, Shearer, Yendol-Hoppey, & Liebert, 2006; Vaughn, Klingner, & Hughes, 2000). Difficulties translating the identification of research-based strategies into common practice and improved outcomes are not unique to education, having also been reported in such areas as prevention and health (Chamberlain, Brown, & Saldana, 2011; Glasgow et al., 2003; Hiss, 2004), child welfare (Aarons & Palinkas, 2007), and mental health services (Hoagwood, Burns, Kiser, Ringeisen, & Schoenwald, 2001).

Despite the increasing focus placed on EBPs in both general and special education, practitioners appear to value PBE as the primary determinant of what and how to teach (Nelson, Leffler, & Hansen, 2009; Simons et al., 2003). Practitioners want to know whether a practice will work for their students and whether it can be implemented in the realities of their classroom and school context (e.g., limited time and resources). They tend not to be concerned with ruling out all possible alternative explanations for changes in student outcomes. In other words, practitioners typically are focused on external validity and relevance rather than internal validity and rigor (Dijkers, 2011; Nelson et al. …
