For the past several decades, considerable scientific and policy interest and research activity have focused on developing evidence-based practices and programs, evidence-informed practices and programs, and other innovations intended to produce better outcomes for exceptional children. Past and current efforts to diffuse, translate, transport, disseminate, mandate, incentivize, and otherwise close the "science-to-service gap" have not been successful in getting the growing list of evidence-based programs routinely into practice. D. L. Fixsen, Naoom, Blase, Friedman, and Wallace (2005) defined evidence-based programs as
collections of practices that are done within known parameters (philosophy, values, service delivery structure, and treatment components) and with accountability to the consumers and funders of those practices .... Such programs, for example, may seek to integrate a number of intervention practices (e.g., social skills training, behavioral parent training, cognitive behavior therapy) within a specific service delivery setting (e.g., office-based, family-based, foster home, group home, classroom) and organizational context (e.g., hospital, school, not-for-profit community agency, business) for a given population (e.g., children with severe emotional disturbances, adults with co-occurring disorders, children at risk of developing severe conduct disorders). (p. 26)
In an extensive review of the diffusion and dissemination literature, Greenhalgh, Robert, MacFarlane, Bate, and Kyriakidou (2004) characterized many of the past and current approaches as "letting it happen" or "helping it happen" (p. 593). That is, researchers publish their findings and leave it to others to read the research and make good use of the evidence-based program. Some program developers also publish manuals, create web sites, and offer workshops to make more detailed information available to potential practitioners and others, thereby helping it happen. Although these predominant letting it happen and helping it happen approaches are necessary, they are not sufficient for reliably producing intended outcomes of research in practice (e.g., Balas & Boren, 2000; Clancy, 2006).
Greenhalgh et al. (2004) identified another, newer category of activity they called "making it happen" (p. 593). In this group of activities, purveyors (i.e., developers who are supporting the use of a particular evidence-based program) and other implementation teams take responsibility for supporting practitioners, supervisors, and managers as they attempt to make full and effective uses of evidence-based programs and other innovations in their daily interactions with children, families, and stakeholders. Greenhalgh et al. concluded that "A striking finding of this extensive review was the tiny proportion of empirical studies that acknowledged, let alone explicitly set out to study, the complexities of spreading and sustaining innovation in service organizations" (p. 614). That is changing. The content for the making it happen approaches identified by Greenhalgh et al. is being operationalized in current research on implementation. For example, D. L. Fixsen et al. (2005) conducted an extensive review of the implementation evaluation literature, and Blase, Fixsen, Naoom, and Wallace (2005) provided qualitative reviews of best practices in use by successful purveyor groups and implementation teams. The science base for implementation is growing out of these more organized and testable "best practices" for implementation. The field is on the verge of having evidence-based implementation methods to reliably realize the promise of evidence-based programs in practice to benefit exceptional children, their families, and society.
Based on this body of work, a formula for successful uses of evidence-based programs in typical human service settings can be characterized as:
Effective interventions × effective implementation = improved outcomes
Note that the formula for success involves multiplication. If interventions are not effective (intervention = zero), then the intended outcomes will not be achieved. If implementation supports are not effective (implementation = zero), then the intended outcomes will not be achieved. A recent budget analysis (Clancy, 2006) reported that the National Institutes of Health spent 99% of its funding on developing new interventions and 1% on supports for their implementation. The Institute of Education Sciences, at 3.6%, has made a slightly greater investment in implementation (Institute of Education Sciences, 2010). The lack of funding to improve the effectiveness of implementation supports may help to explain the much-discussed science-to-service gap, the "quality chasm," and how "Phase Two Translation stumbles unguided towards a very uneven, extraordinarily incomplete, and socially disappointing state of affairs" (Hiss, 2004, p. 13).
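To make the multiplicative logic concrete, the formula can be restated with hypothetical effectiveness scores on a 0-to-1 scale (the scores below are illustrative assumptions for this sketch, not measures proposed in this article):

\[
\text{Improved outcomes} = \text{Intervention effectiveness} \times \text{Implementation effectiveness}
\]
\[
0.8 \times 0.0 = 0.0 \qquad 0.0 \times 0.8 = 0.0 \qquad 0.8 \times 0.8 = 0.64
\]

Under this illustration, a strong intervention without implementation support (or strong implementation support for an ineffective intervention) produces no benefit; improved outcomes appear only when both factors are greater than zero.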
The lesson is clear: If society wants a good return on its considerable investment in research to develop evidence-based programs, society will need to invest in implementation science and best practices. What does this mean for state and federal governments and human service systems? The answer lies in establishing a framework for developing implementation capacity for the full and effective uses of evidence-based programs statewide (i.e., "scaling up" evidence-based programs to the state level).
Currently there is no commonly accepted definition of scaling or scaling up in human services or other fields (A. A. M. Fixsen, 2009). However, to achieve the goal of using evidence-based programs to improve human services, a critical mass of use would need to be reached to produce socially significant benefits. We estimate that the threshold for scaling an evidence-based program is the point at which at least 60% of the service units in a system are using the program with fidelity and good outcomes. At the 60% point, the system itself would need to have changed to accommodate, support, and sustain the outcomes of the evidence-based program and demonstrate the promised benefits to society.
[FIGURE 1 OMITTED]
POLICY AND PRACTICE TO SUPPORT IMPLEMENTATION OF EVIDENCE-BASED PROGRAMS: A FRAMEWORK
Figure 1 illustrates a way to conceptualize making use of evidence-based programs on a socially significant scale. This is a simultaneous bottom-up and top-down approach for implementing evidence-based programs and creating hospitable system environments in which they can flourish. The full and effective uses of interventions statewide require reinventing organizations and systems to develop and host the required implementation infrastructure, as well as the interventions themselves; both are necessary.
IMPLEMENTATION TEAM, TEACHERS, INNOVATIONS, AND STUDENTS
The approach begins with implementation of evidence-based programs. As shown at the bottom of Figure 1, an implementation team (Higgins, Weiner, & Young, 2012) works with teachers to help them learn the new ways of work that define an evidence-based program. With ongoing support from the implementation team, teachers and staff (practitioners) make full and effective use of the evidence-based program or other innovation in their interactions with students to produce the intended benefits.
Implementation team members have special expertise regarding evidence-based programs, implementation science and practice, improvement cycles, and organization and system change methods. They are accountable for "making it happen": for assuring that effective interventions and effective implementation methods are in use to produce intended outcomes for children and families. Typically, implementation teams are developed on site with support from groups outside the organization or system (i.e., the "external supports for system change" in Figure 1).
The components shown in the bottom part of Figure 1 (i.e., implementation team, teachers, innovations, and students) are the essential foundation for all that follows. If evidence-based programs are not being implemented reliably to produce effective outcomes, there is nothing useful to scale up in a state. State officials and others need to remain focused on this level until this essential foundation has been established. On the other hand, if state officials stop at this level they will have created one more "island of excellence" that can be admired but will not produce socially significant outcomes for all children and families who could benefit.
STATE MANAGEMENT TEAMS AND POLICIES THAT SUPPORT EFFECTIVE PRACTICE
Resources for human services are precious and must be used wisely to maximize real benefits to as many children and families as possible. In an examination of the results of public policy, Schofield (2004) found that
The majority of the literature concerning the implementation of public policy assumes that public managers can carry out new policy initiatives regardless of the behavioural, cognitive or technical demands that the introduction of such policies may make upon them. There has been a tendency to assume that managers actually have the detailed technical knowledge by which to enact such new policies. (p. 283)
She described implementation expertise as the crucial missing element in human service systems.
Increasingly, state and federal governments and district leaders (state management teams) are supporting and occasionally insisting upon the use of evidence-based programs in human services. They are developing policies and funding structures to enable the work of the implementation teams who are working with practitioners (e.g., teachers and staff) to use evidence-based programs to benefit children and families (the arrow on the left side of Figure 1).
PRACTICE-INFORMED CHANGES IN POLICY
In our examination of the literature and discussions with system change agents internationally, there are many examples where state management teams have mandated the use of evidence-based programs with little impact on service delivery (e.g., Chapin Hall Center for Children, 2002; O'Donoghue, 2002). There are some examples where evidence-based programs were used with good outcomes, but only for a limited period of time (e.g., Bryce et al., 2010; Glennan, Bodilly, Galegher, & Kerr, 2004). There also are a few examples of success where policies encouraged the use of effective innovations at the practice level, the innovations were supported with effective implementation efforts, and systems changed to encourage widespread use of the innovation (e.g., Glennan et al., 2004; Khatri & Frieden, 2002).
What differentiates scale-up successes from those with temporary or no outcomes? The practice-to-policy communication loop (the arrow on the right side of Figure 1) seems to be a critical feature of successful efforts to implement evidence-based programs on a socially significant scale. In successful system change efforts, state management teams frequently (at least monthly) hear about what is helping or hindering the efforts to make full and effective use of evidence-based programs at the practice level (Khatri & Frieden, 2002). The information may consist of descriptions of experiences and may include data collected with reasonable precision. Onyett, Rees, Borrill, Shapiro, and Boldison (2009) noted that
There is a need to develop capacity for delivering such whole systems interventions wherein thinking can be challenged, issues about authority and the exercise of power candidly explored, and where participants can continue to learn and adapt to ever-changing circumstances. (p. 11)
The practice-to-policy communication loop provides the opportunities to carry out these critical functions.
Based on regular feedback from the practice level, state management teams have data that can drive their decision-making to change the human service system itself. With this information from implementation team members, state management teams can reduce systems barriers to implementation and strengthen the facilitators to achieve the desired outcomes for children. Current system functions have evolved over time and often do not work in concert to produce desired outcomes. For example, staff may be selected through human resource department recruitment ads and interviews, trained through a statewide system for accumulating continuing education credits, supervised by administrative staff who are not familiar with innovations, and evaluated annually using generic job performance criteria. Funding may be distributed according to a population formula unrelated to needs or opportunities to build capacity. Leaders may be hired without regard for their support for evidence-based programs that are in place and currently are producing student benefits. Over time, functions critical to achieving desired student outcomes have come to operate in silos and are not integrated to effectively and efficiently produce valued staff competencies or student outcomes (Manna, 2008).
System change occurs when state leadership responds to these issues by altering funding streams, modifying staff certification standards and agency accreditation standards, shifting accountability measures to include intervention and implementation outcomes, changing position descriptions and salary scales, re-negotiating union contracts, solving transportation and supply issues, and so on--all designed to better support work at the practice level so that the desired benefits can be realized reliably. De-siloing and defragmenting current human service systems and more fully aligning system functions with system goals is a daunting task. However, it can be achieved by state management teams that are informed and engaged in the process of supporting effective practices and demonstrated outcomes.
EXTERNAL SUPPORTS FOR SYSTEM CHANGE
External supports for system change (see left side of Figure 1) play a key role at every level. The old adage that a lawyer who represents himself has a fool for a client is applicable here: Participants in current systems have a difficult time creating the conditions for system change. It is difficult for those immersed in a system "to monitor and question the context in which it is operating and to question the rules that underlie its own operation" (Morgan & Ramirez, 1983, p. 15), which are conditions for organizational and system change (Marzano, Waters, & McNulty, 2005). Given the scarcity of implementation expertise, external support groups are especially useful for creating the first-generation implementation teams to provide implementation supports for teachers and staff. The external support group can facilitate the creation and use of the practice-to-policy communication loop and help prepare the state management team to make changes in the various departments and units within the system to remove barriers and strengthen facilitators for producing desired outcomes for students and others.
Nord and Tucker (1987), Khatri and Frieden (2002), Klein (2004), Barber and colleagues (2009), and others have noted the critical role of external facilitation when attempting to produce systemic change. In studying the implementation of complex innovations, Nord and Tucker were surprised that external facilitation overcame the expected influences of organizational culture, climate for change, and existing staff competencies. They concluded that external facilitation helps assure "fledgling ideas are not crushed by the established routines" (p. 25). In education, groups such as the Center on Innovation & Improvement (www.centerii.org), the U.S. Department of Education's Technical Assistance Center on Positive Behavioral Interventions & Supports (www.pbis.org), and the State Implementation and Scaling up of Evidence-based Practices (SISEP) Center (www.scalingup.org) are examples of external systems change support organizations that provide purposeful external facilitation to help systems initiate change and manage change processes.
USING THE POLICY AND PRACTICE FRAMEWORK
The framework presented in Figure 1 outlines the critical features for the full and effective use of evidence-based programs statewide. There are complex relationships among the features of the framework in practice. Why are implementation and system change so difficult? The answers may be as complex as the issues themselves.
IMPLEMENTATION TEAM, TEACHERS, INNOVATIONS, STUDENTS--AND "WICKED PROBLEMS"
Implementing evidence-based programs in typical practice settings is not for the faint of heart. Chaos theory, complexity theory, and ecological theories only begin to describe the "wicked problems" that await those who attempt to change practices, organizations, and systems. Rittel and Webber (1973) described wicked problems as those that are difficult to define and that fight back when you try to solve them. That is, interests vested in the system-as-is suddenly appear and typically deter attempts to change the system. In addition, attempts to change a system expose faulty operating assumptions and gaps in system functions (Marzano et al., 2005; Morgan & Ramirez, 1983) that stand in the way of systems accomplishing their legislated/stated goals.
INNOVATIONS: DEFINING A PROGRAM
What is an evidence-based program or other effective innovation? Dane and Schneider (1998) summarized reviews of over 1,200 outcome studies and found that investigators assessed the presence or strength (fidelity) of the independent variable (the intervention) in about 20% of the studies, and about 5% of the studies used those assessments in analyses of outcome data. This is a critical omission. Without information on the presence and strength of the independent variable, it is difficult to interpret the outcomes in any study (Dobson & Cook, 1980). The lack of description and specification of programs (evidence-based or otherwise) is not a trivial matter. The Individuals with Disabilities Education Act (2006) requires "the use of scientifically based instructional practices, to the maximum extent possible" (20 U.S.C. § 1400[c][E]) to benefit children with exceptional needs. For implementation of evidence-based programs, one often is left wondering what "it" is that practitioners such as teachers and staff are supposed to use to benefit children statewide.
The current literature supporting evidence-based programs focuses heavily on "evidence." Any graduate student now can recite one or more definitions of evidence-based programs with criteria that relate to the rigor and number of randomized controlled trials or single-case designs done under stringent conditions by multiple investigators (e.g., What Works Clearinghouse, http://ies.ed.gov/ncee/wwc; Best Evidence Encyclopedia, http://www.bestevidence.org). The criteria for evidence are important for building confidence about outcomes, but what defines a "program"? There are no commonly accepted definitions or criteria related to the independent variable (the intervention) in those gold-standard studies.
The problem is that practitioners do not use well described standards of experimental rigor in their interactions with children and families; they use programs. The lack of adequately defined programs is an impediment to implementation with good outcomes for exceptional children and others (e.g., Hall & Hord, 2011). Figure 2 outlines criteria for defining a program; to our knowledge, this is the first time such criteria have been published. The criteria are based on extensive experience with evidence-based program development and implementation and ongoing reviews of the literature.
FIGURE 2. Criteria for Defining a Program

1. Clear description of the program
   a. Clear philosophy, values, and principles
      i. The philosophy, values, and principles that underlie the program provide guidance for all treatment decisions, program decisions, and evaluations and are used to promote consistency, integrity, and sustainable effort across all provider organization units.
   b. Clear inclusion and exclusion criteria that define the population for which the program is intended
      i. The criteria define who is most likely to benefit when the program is used as intended.
2. Clear description of essential functions
   a. There is a clear description of the features that must be present to say that a program exists in a given location (essential functions sometimes are referred to as core intervention components, active ingredients, or practice elements).
3. Operational definitions of the essential functions
   a. Practice profiles describe the core activities that allow a program to be teachable, learnable, and doable in practice; and promote consistency across practitioners (e.g., teachers and staff) at the level of actual service delivery (Hall & Hord, 2011).
4. A practical assessment of the performance of practitioners who are using the program
   a. The performance assessment relates to the program philosophy, values, and principles; essential functions; and core activities specified in the practice profiles; and is practical and can be done repeatedly in the context of typical human service systems.
   b. Evidence that the program is effective when used as intended
      i. The performance assessment (fidelity) is highly correlated with intended outcomes for children and families.
These criteria are fairly straightforward. Programs that can become standard practice in education and other human service domains need to be clearly described so they can be taught, learned, and implemented with good outcomes. However, given the paucity of information about programs, as described by Dane and Schneider (1998), Durlak and DuPre (2008), and others, implementation teams and external support groups may have to search to find the information (e.g., philosophy, values, and principles) or may have to do original work to satisfy some of the criteria (e.g., establish a measure of fidelity). Implementation teams are accountable for "making it happen" at the practice level--and that requires knowing what "it" is that must be done well to produce the desired results for students and others. If the program developers did not specify what "it" is they have investigated, then the implementation team and the external support group must fill in the gaps related to the criteria for a program.
IMPLEMENTATION OF EVIDENCE-BASED PROGRAMS
Once a program is defined and operationalized, teachers and other practitioners need to learn how to use the program as intended. Evidence-based programs and other innovations typically represent new ways of work, which must be taught, learned, and used in practice if children with exceptional needs are to benefit. In this regard, the "letting it happen" and "helping it happen" approaches (Greenhalgh et al., 2004) have not been very productive in education or other fields. For example, Aladjem and Borman (2006) and Vernez, Karam, Mariano, and DeMartini (2006) examined the implementation supports and outcomes for a few thousand schools that were attempting to make effective use of one or more evidence-based comprehensive school reforms. The authors of these studies found that in Years 1 to 3 fewer than half of the teachers received the prescribed training and about half of those teachers received any of the follow-up coaching as specified by the developers of the school reforms. In Years 4 and 5, fewer than 10% of the schools were using the evidence-based reforms as intended. Consequently, most of the students had no opportunity to benefit from the evidence-based programs because the programs were never used as intended (an implementation failure, not an intervention failure).
Active (making it happen) methods for implementing evidence-based programs and other innovations produce higher rates of success more quickly. For example, D. L. Fixsen, Blase, Timbers, and Wolf (2001) reported 80% success in about 3 years with implementation teams using active methods. In contrast, Balas and Boren (2000) reported 14% success after about 17 years without the use of implementation teams. Implementation teams with the requisite knowledge and expertise, however, are not common.
Scaling up interventions to produce socially significant outcomes for all children who could benefit requires first scaling up implementation capacity. Implementation capacity can be assessed by the number of competent implementation teams available in a state that are engaged in using active implementation methods. Given the general lack of implementation capacity in state systems, it is difficult to know how many implementation teams eventually may be needed.
Implementation happens in discernable stages (implementation stages), and there are common components of successfully implemented programs (implementation drivers). Active implementation methods incorporate best practices related to the stages of implementation (i.e., exploration, installation, initial implementation, and full implementation) and implementation drivers (i.e., competency, organization, and leadership). Implementation best practices have been derived from concept mapping and nominal group meetings with those who have been implementing evidence-based programs successfully for several years (Blase et al., 2005), and implementation stages and drivers were established as a result of an extensive review and synthesis of the implementation evaluation literature (D. L. Fixsen et al., 2005; Wallace, Blase, Fixsen, & Naoom, 2008). It should be noted that implementation stages and drivers are not linear or separate; each is embedded in the other in interesting combinations (see Figure 3).
The active implementation frameworks apply at every level of an education or other human service system. The data supporting the frameworks and their applications in multiple settings have been described in detail (D. L. Fixsen et al., 2005; D. L. Fixsen, Blase, Naoom, & Wallace, 2009; D. L. Fixsen, Blase, Duda, Naoom, & Van Dyke, 2010). In this article, we illustrate the descriptions of implementation stages and drivers with original information regarding the application of the active implementation frameworks at the state level. The state-level information is derived from the work of the SISEP Center, which began working with four states in 2008. The purpose of the SISEP Center is to develop implementation capacity so that states can make full and effective use of evidence-based programs and other effective innovations in schools statewide. (In Figure 1, the SISEP Center would be the "external support for system change.")
Exploration Stage. Exploration stage activities include shared communication about the strengths and needs of a system or organization, the possible evidence-based programs that might help to produce improved outcomes, assessment of the implementation drivers needed to support staff members, resources required and their sources, and so on. The result of this stage is common understanding and acceptance of the intervention and the required implementation supports, and a collective decision to proceed. Creating readiness for change in individuals and organizations is an important part of the work and effectiveness of implementation teams. Prochaska, Prochaska, and Levesque (2001) found that only about 20% of individuals and organizations in their studies were ready for change. Thus, creating readiness is an essential function for statewide uses of evidence-based programs and begins in the exploration stage. A key exploration stage outcome is buy-in and support from relevant stakeholders for the proposed new ways of work (e.g., evidence-based programs). The length of the exploration stage depends on the resources allocated (e.g., time, people), access to information, and authority to make decisions. Exploration stage work often can take 1 to 2 years.
[FIGURE 3 OMITTED]
To begin its exploration work with states to develop an infrastructure for implementation of a variety of evidence-based programs in education, the SISEP Center e-mailed information to state education agencies, technical assistance centers that worked with states, and federal departments with an interest in having states use evidence-based programs. The e-mails invited state leaders to participate in conference calls about scaling interventions and system change in education. The first conference call provided general information, and the second focused on the difficulties of initiating change and managing change processes in large systems, the challenges that might arise, and the role of the Center in helping states work through the challenges to build capacity for change. The key state education leaders in 36 states that participated in these calls were invited to submit a 10-page (maximum) application responding to five selection criteria (i.e., leadership, extent of current use of evidence-based programs, availability of statewide data systems, resources devoted to evidence-based programs, and willingness to participate in a multistate community of practice). Sixteen states submitted applications. Independent reviews and ratings of the application information by two or more of the co-directors of the Center (Dean Fixsen, Karen Blase, Rob Horner, and George Sugai) produced two groups: eight states that scored higher on each of the criteria and eight states that scored lower on one or more of the criteria. The largest differences in ratings occurred for the criteria related to the current use of evidence-based programs and leadership.
Subsequent on-site visits at the state departments of education with each of the eight top-rated states allowed the parties (state officials, stakeholders, and SISEP) to meet each other, share information, address key issues, and answer questions. At least two members of SISEP were present for each on-site visit (one person participated in all site visits). They compared notes and impressions immediately after each visit and communicated their findings to the overall SISEP staff group.
Of the eight top-ranked states, one state withdrew (citing leadership issues) and one state did not convene a stakeholder group, leaving six states. Each of the remaining six states met the criteria for selection. Consequently, those six states, representing a total of 14,984 schools, were selected to be part of the scaling up initiative. Of these, the SISEP Center had the capacity to begin work promptly with four states.
Installation Stage. The installation stage involves acquiring or developing the resources needed to fully and effectively engage in the new ways of work; the need for these resources is discussed and agreed upon during the exploration stage. During the installation stage the state needs to deliver on the promises. Resources and activities during installation are focused on creating new job descriptions, establishing interview methods and preparing interviewers to select staff to do the new work, employing people to do the work, developing data collection sources and protocols, securing access to timely training, and so on. Organizations often think of evidence-based programs as "plug and play" and are surprised by the need for preparation and resources (D. L. Fixsen et al., 2005). Many attempts to use evidence-based programs end at this stage. Implementation teams help states and other organizations anticipate these needs and help them prepare for the next stage.
During the installation stage, the implementation drivers come into play. Competency drivers include staff selection, training, coaching, and performance assessment (fidelity). Organization drivers include facilitative administration, decision support data system, and systems interventions. Leadership drivers include technical and adaptive leadership to assure a persistent and integrated approach to change and performance in the system.
The work of the SISEP Center illustrates installation-stage activities at a state level. The installation stage flowed from the site visit conducted during the exploration stage as part of the selection process. During the state selection site visit the scaling-up enterprise was outlined and the state's commitment of people and resources was reviewed, discussed, and agreed to. Capacity building requires people who can learn the intricacies of the practice and science of implementation, organization change, and system transformation. People represent resources a state must allocate/reallocate to create a new infrastructure to support implementation and scaling up evidence-based programs and other innovations. In the exploration stage the SISEP Center emphasized that each state would need to invest about $2 million to $3 million in the initial stages of scaling up. Most of the states already were investing that much or more in various initiatives in their state (e.g., supports for literacy programs, science and math programs, positive behavior intervention and support programs, response-to-intervention approaches).
The SISEP Center assigned a staff person as State Scaling Coordinator for each selected state; each coordinator was highly skilled and experienced in implementation, organization change, and system reinvention. The SISEP coordinator visits the state each month for 2 to 3 days and corresponds frequently with state personnel between monthly visits. For the first several months, the coordinator's discussions during the monthly visits with the state management team were organized around a list of goals. The first item that was discussed at each meeting was criteria for selecting and securing the necessary in-state staff: two state transformation specialists (one from general education, one from special education) and nine members for the first regional implementation team. The state transformation specialists are designated as the leaders for the statewide scaling efforts. The regional implementation team is the first group to actually begin the work of building implementation capacity in the first few districts within a region in the state. The "deadline" was to have nearly all of these 11 people identified about four months into the installation process, and in place and ready to attend the Scaling Up and Implementation Institute in Chapel Hill, North Carolina (staff training driver), about seven months into the installation process.
Without the people in place, capacity development cannot occur; the capacity to scale up evidence-based programs resides in people who have the knowledge, skills, and abilities to do this new kind of work. The selection process for state transformation specialists and regional implementation team members was based on recruiting in-state staff members who already were doing successful work with one initiative or another (staff selection driver). This was the reason for having current evidence-based program implementation as a state selection criterion: It helped assure the availability of implementation-experienced staff for scaling.
In three of the four states the focus on staffing resulted in a return to exploration-stage discussions about the need for staff, availability of resources, and so on. This is typical of implementation work, where work at one stage results in questioning the need for change or resources. After 7 months, all four states had their full complement of state transformation specialists and regional implementation team members present for the Scaling Up and Implementation Institute.
Initial Implementation Stage. During the initial implementation stage staff are attempting to use newly learned skills in the context of an organization that is just learning how to change to accommodate and support the new ways of work. This is the most fragile stage where the awkwardness associated with trying new things and the difficulties associated with changing old ways of work are strong motivations for giving up and going back to education as usual.
External supports for system change are essential to success during the initial implementation stage. As the external support group, the SISEP Center is helping to develop the knowledge and skills of the state transformation specialists and the regional implementation team members while helping the state management team members adjust organization roles and functions to align with the program. Much of this work is done in the context of helping the first few districts develop implementation capacity and begin supporting the use of evidence-based programs in their schools and classrooms. The implementation drivers are designed to develop competence and confidence of staff, create hospitable organizational environments, and develop leadership to address the variety of challenges that face any effort to change systems and practices within systems.
As the Center began working with states it was clear that the participation of the state management team was uneven across the states. This resulted in a return to exploration-stage discussions of leadership functions and a reconsideration of the mutual decision to proceed (or not) with scaling activities in two states. Highly involved leadership is critical to the success of implementation and system change. In the future, more time during the exploration stage will be devoted to state management team scheduling and functioning to clarify current practices and typical schedules, and to secure agreement about which state management team members will attend the monthly meetings with SISEP.
The monthly on-site visits provide opportunities for SISEP staff to work with the state management team and to coach (an implementation driver) the state transformation specialists and members of the regional implementation teams. Evaluations of implementation in practice also provide an assessment of fidelity of implementation capacity development (an implementation driver). The work to develop district implementation capacity has begun in each state. This process was slowed as the Center learned how to map current implementation capacity in districts, build on current implementation strengths, and conduct action planning in the midst of multiple demands on district staff. The state transformation specialists and members of the regional implementation teams are currently conducting exploration-stage activities and beginning installation-stage activities with districts that are ready to proceed.
As illustrated by the SISEP Center work, implementation stages and drivers apply at each level of activity. They guide the work of the Center with in-state staff, the work of state transformation specialists with regional implementation teams, and the work of regional implementation teams to develop district implementation capacity. Soon they will guide the work of districts with buildings and teachers. These iterative teams define the infrastructure for implementing evidence-based programs and other innovations to significantly improve education for all students statewide.
Full Implementation Stage. In the full implementation stage the new ways of providing implementation supports to districts, schools, and teachers will be the standard ways of work, where teachers and staff routinely provide high-quality services to exceptional children. The external supports for system change can fade out at this point. However, the state transformation specialists and the regional implementation teams remain essential contributors to the ongoing success of using evidence-based programs and other effective innovations in all schools. State leaders, teachers, staff, administrators, and district leaders come and go, and each new person needs to develop the competencies to effectively support student education. The use of the active implementation supports is essential to producing the intended outcomes with one group of students after another, one group of teachers after another, one group of district leaders and staff after another, and one group of state leaders and staff after another, year after year.
PRACTICE-POLICY COMMUNICATION LOOP
The arrow on the right side of Figure 1 links the practice level to the state management team, the same state management team that originated the policies that enabled the implementation of evidence-based programs at the practice level. The practice-policy communication loop is a reflective interface between practice and policy, where feedback regarding information sent out (policies that enable change in practices) returns into the component from which it originated (practices that inform policies). This feedback loop is critical to developing a supportive educational system and hospitable conditions for the new ways of work.
It is a truism that current systems are perfectly "designed" to produce their current results (Beer, Eisenstat, & Spector, 1990). If the results are to change, the system needs to change to support the methods to produce different results. Without the support of state transformation specialists, regional implementation teams, and state management teams, the current system will overwhelm virtually any attempt to use new evidence-based programs or other innovative ways of work. There is no "intention" for education systems to function this way: It is the nature of systems (e.g., Ashby, 1962; Green, 1980).
To promptly resolve "wicked problems" and other issues that arise once scaling is underway, a state management team must establish open communication channels so information from the experiences of those doing the new work in schools and districts can come back to them on a regular basis. Practitioners often encounter barriers to full and effective implementation that can only be resolved at the systems level. Some barriers include lack of resources for implementation supports (training, coaching, performance assessments), hiring policies that do not support employment of district superintendents with experience using evidence-based programs, and using federal funds for changes in curriculum but not improvements in instruction (Scott, 2008). Without an effective practice-policy communication loop, state management teams do not have the immediate and reliable information they need to make good decisions and improve upon past decisions.
Finally, state management must respond to the reflective information by making constructive changes in the system to better support the work at the practice level (e.g., Barber & Fullan, 2005). Helping state management teams change their roles and functions and change their relationships with other components of the systems (e.g., districts, stakeholders) is part of the work of the external support for system change (e.g., the function provided by SISEP for the states). The changes that are realized through this process result in reinvention of the education system itself. Functions, roles, and structures are changed to better accommodate and support work at the practice level.
Ulrich (2002) has described current systems producing current outcomes as legacy systems that are the result of "decades of quick fixes, functional enhancements, technology upgrades, and other maintenance activities [that] obscure application functionality to the point where no one can understand how a system functions" (pp. 41-42). Ulrich was describing computer software applications but he might as well be describing education and human service systems. We bring computer science into the discussion not only to make the point that every system has problems with fragmentation, dysfunction, and wasted resources, but also to make another critical point. When legacy software systems containing billions of lines of code were transformed to more efficiently and effectively produce desired new results, computer scientists found that about 80% of the "old" system still remained in the "new" system when the transformation was complete (Ulrich, 2002). The problem was that no one could look at the code in advance and know what to keep and what to change. Those decisions came from making changes, seeing the results, then modifying or keeping the components being examined--exactly the function of the practice-policy feedback loop.
When changes are made in education and human services the system responds and the responses can help to separate the functional components from the dysfunctional or nonfunctional components of complex education systems. This process results in system improvements only if the system leaders are attending to the relevant outcomes. Figure 4 outlines a protocol for establishing and using a practice-policy communication loop. Learning from the results of change while change is occurring is the purpose of the practice-policy communication loop. With the practice-policy communication loop in place, state transformation specialists and regional implementation team members make regular (at least monthly) reports to state management regarding facilitators (keep those) and impediments (change/discard those) encountered while attempting to implement evidence-based programs in districts, schools, and classrooms. The reflective information helps to identify the 20% that needs to change and the 80% that should be retained, strengthened, and closely aligned with desired outcomes at the practice level.
As we have introduced these ideas and methods to state management teams in education, many leaders have expressed enthusiastic support. Executives especially appreciate having a credible source of critical information (the state transformation specialists and regional implementation team members, and any district staff they bring with them) that is delivered in manageable doses (monthly) accompanied by suggestions for constructive alternatives.
Individuals providing external support for system change (e.g., SISEP) help to establish the practice-policy communication loop (Klein, 2004). The initial (exploration stage) interactions between SISEP staff and the state management team (as described in Figure 4) are intended to assure mutual understanding and agreement to proceed. Thus, the practice-policy communication cycle begins with a state management team that is ready and willing to participate. During on-site meetings, the SISEP staff help establish a protocol that guides the implementation team members (and others) in prioritizing and providing data to the state management team. The SISEP staff also guide state management team members regarding how to affirm the information being delivered and act constructively during the monthly meetings and between meetings to bring about systemic change. It is the role of the external support group to initiate and help manage the practice-policy communication process so that it functions as intended.
EXPANDING IMPLEMENTATION CAPACITY
The current work of the SISEP Center in education has helped shape the components depicted in Figure 1 and is guided by that framework. Where do the funds come from to support implementation teams? There are two sources of funding. First, there is growing evidence that evidence-based programs implemented effectively in practice not only produce better outcomes, but also save money for human service systems. Summaries by Khatri and Frieden (2002), Rosenheck, Neale, Leaf, Milstein, and Frisman (1995), Schoenwald (2010), the Washington State Institute for Public Policy (1998), and Wensing, Wollersheim, and Grol (2006) are among the cost analyses showing the savings that occur when evidence-based programs are used to produce actual benefits. The savings come in the form of increased benefits and reduced costs for later services. Durlak and DuPre (2008) estimated that evidence-based programs used with acceptable fidelity have effect sizes that are three to 12 times greater than those used with low fidelity. These effect sizes translate to greater benefits to children and families. This represents the added value of effective implementation supports: If services are delivered effectively, then costs of related services go down. For example, students who learn how to read do not need reading tutors or other supports for literacy throughout their education careers. The reduction in need for extra services represents savings for schools and districts.
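A simple worked example may help convey the size of the fidelity multiplier cited above; the baseline effect size is purely hypothetical and is not a value reported by Durlak and DuPre (2008):

\[
d_{\text{low fidelity}} = 0.10 \;\Rightarrow\; d_{\text{acceptable fidelity}} \approx (3 \text{ to } 12) \times 0.10 = 0.30 \text{ to } 1.20
\]

That is, under the assumed baseline, the same program delivered with acceptable fidelity would move from a negligible effect to a moderate or very large one, with corresponding reductions in the need for later remedial services.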
The second source of funding for implementation capacity development is the result of the practice-policy communication loop. As the education system is changed, the silos and fragments become integrated into a more coherent system that is more precisely focused on achieving intended outcomes (e.g., Barber & Fullan, 2005). Staff functions are repurposed, system units are redefined, and structures are realigned to support implementation of evidence-based programs or other innovations so that student outcomes are improved in every school. The result is a more effective and more efficient education system. Savings result from using accountability data to improve implementation supports rather than expanding compliance units, using practice-policy communications to discontinue policies and practices that consume resources and are not helpful, and combining fragmented units into integrated units that are more precisely and effectively focused on professional development and continuing education in support of evidence-based programs in a state.
The savings from the early implementation teams can be used to fund the next generation of implementation teams until all districts and schools in the state are covered. Combining the cost-benefits of evidence-based programs with the savings from system reinvention is an example of the "virtuous circles" described by Fox and Gershman (2000) and Putnam (1993) in other contexts where efficiencies are the product of a singular focus on effectiveness. Once the process begins, it can feed itself.
A problem in human services is that services cannot be shut down, retooled, and then restarted in a new form. Students pour out of school buses and into classrooms every day whether teachers and staff are ready or not. Any new evidence-based program must be implemented in the context of continuing to provide education services using current methods. Thus, during the first year or two, there will be extra costs for establishing implementation capacity to assure effective uses of the evidence-based services in order to produce the intended good outcomes. After that, the savings from "Generation 1" can help to finance ensuing expansions of implementation capacity (Barber & Fullan, 2005).
A related problem in human services is that governments continue to invest heavily in "evidence-based programs" and "innovations" without first investing in the development of the capacity to implement those interventions fully and effectively. This leaves education systems and others without the funds to invest in the first implementation teams and leaves education in its current state of trying many innovations and succeeding only occasionally, and only for a while. When establishing new initiatives, we recommend that policy makers and funders set aside a minimum of 15% of the funds for the development and operation of active implementation supports for the new initiative. This modest investment could substantially improve student outcomes for decades to come.
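As a hypothetical illustration of the recommended set-aside (the initiative budget below is an assumption for the example, not a figure from this article), a new statewide initiative funded at $20 million would reserve at least:

\[
\$20{,}000{,}000 \times 0.15 = \$3{,}000{,}000
\]

for developing and operating active implementation supports, an amount in the same range as the $2 million to $3 million initial investment the SISEP Center asked states to commit during the exploration stage.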
In the United States there are (approximately) 60 million students being taught by 6 million teachers and staff nested in 90,000 schools located within 15,000 districts that reside within 58 jurisdictions (e.g., states and territories; Weiss, Knapp, Hollweg, & Burrill, 2002). This includes more than 6 million exceptional children with disabilities (U.S. Department of Education, 2008) in schools throughout the education system. Emphasizing the supporting science of evidence-based programs is necessary but not sufficient for producing socially important outcomes for exceptional children and others nationally. Exceptional children will benefit when programs are defined and operationalized; effective implementation supports are available to all teachers and staff; and practice-policy communication loops are in place to defragment, de-silo, and align education system components with effective practices. We have presented a framework to guide the establishment of policy and practice to achieve statewide improvements in student outcomes. The framework is based on the best available information from literature reviews, system change agents, and successful efforts to scale up evidence-based programs and other innovations in a variety of fields.
Reviews of student (Grigg, Daane, Jin, & Campbell, 2003) and adult literacy (Kutner et al., 2007) provide markers for the lack of progress in improving education outcomes. Literacy scores have changed very little since 1971 even though innumerable education reforms have come and gone, the U.S. Department of Education was created and elevated to a Cabinet position in the federal government, and funding has increased dramatically over the past 40 years. During that time, few things in American society have remained as stable as literacy scores for students (hovering around 220 on a 500-point scale). In a briefing report on school improvement, Jerald (2005) stated,
As thousands of administrators and teachers have discovered too late, implementing an improvement plan--at least any plan worth its salt--really comes down to changing complex organizations in fundamental ways.... Unfortunately, educational researchers, policymakers, and leaders have consistently failed to acknowledge and communicate the importance of [the] crucial implementation stage in the school improvement process. Indeed, given the emphasis on planning--and relative silence about implementation--in many of the guidebooks and tools meant to help with school improvement, school leaders can easily come away with the impression that if a team gets the plan right, successful implementation of the plan must surely follow. (p. 2)
Perhaps it is time to invest in implementation capacity so that evidence-based programs and other innovations will have a chance to produce their promised results for students, especially those with special needs.
We want to thank Rob Horner and George Sugai, Co-Directors of the SISEP Center, who are an endless source of intelligence and support. We also thank our National Implementation Research Network colleagues--Leah Bartley, Michelle Duda, Sandra Naoom, and Barbara Sims--who provide continual inspiration and delight, and recognize the many contributions of Jennifer Coffey, the OSEP Project Officer for the State Implementation and Scaling Up of Evidence-Based Practices Center. Her ideas and support have added considerably to the content described in this article. Finally, we thank Bryan Cook and Sam Odom for their editorial comments and detailed suggestions for revisions. They helped bring order out of the chaos we originally presented to them.
Manuscript received April 2012; accepted July 2012.
Aladjem, D. K., & Borman, K. M. (Eds.). (2006). Examining comprehensive school reform. Washington, DC: Urban Institute Press.
Ashby, W. R. (1962). Principles of the self-organizing system. In H. V. Foerster & J. G. W. Zopf (Eds.), Principles of self organization: Transactions of the University of Illinois symposium (pp. 255-278). London, England: Pergamon Press.
Balas, E. A., & Boren, S. A. (2000). Managing clinical knowledge for health care improvement. In J. Bemmel & A. T. McCray (Eds.), Yearbook of medical informatics 2000: Patient-centered systems (pp. 65-70). Stuttgart, Germany: Schattauer Verlagsgesellschaft.
Barber, M., Darling-Hammond, L., Elmore, R., Jansen, J., Levin, B., Noguera, P., ... Tucker, M. (2009). Change wars. Bloomington, IN: Solution Tree.
Barber, M., & Fullan, M. (2005). Tri-level development: Putting systems thinking into action. Education Week, 24(25), 34-35.
Beer, M., Eisenstat, R. A., & Spector, B. (1990). Why change programs don't produce change. Harvard Business Review, 68(6), 158-166.
Blase, K. A., Fixsen, D. L., Naoom, S. F., & Wallace, F. (2005). Operationalizing implementation: Strategies and methods. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute.
Bryce, J., Gilroy, K., Jones, G., Hazel, E., Black, R. E., & Victora, C. G. (2010). The accelerated child survival and development programme in West Africa: A retrospective evaluation. The Lancet, 375(9714), 572-582. http://dx.doi.org/10.1016/S0140-6736(09)62060-2
Chapin Hall Center for Children. (2002). Evaluation of family preservation and reunification programs. Chicago, IL: Author.
Clancy, C. (2006). The $1.6 trillion question: If we're spending so much on healthcare, why so little improvement in quality? Medscape General Medicine, 8(2), 58.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23-45. http://dx.doi.org/10.1016/S0272-7358(97)00043-3
Dobson, L., & Cook, T. (1980). Avoiding type III error in program evaluation: Results from a field experiment. Evaluation and Program Planning, 3, 269-276. http://dx.doi.org/10.1016/0149-7189(80)90042-7
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350. http://dx.doi.org/10.1007/s10464-008-9165-0
Fixsen, A. A. M. (2009). Scaling up educational programs: Barriers and facilitators from the perspective of state leaders. Portland, OR: School of Social Work, Portland State University.
Fixsen, D., & Blase, K. (2009). Scaling up innovation [Webinar]. Chapel Hill, NC: SISEP Center.
Fixsen, D. L., Blase, K., Duda, M., Naoom, S., & Van Dyke, M. (2010). Implementation of evidence-based treatments for children and adolescents: Research findings and their implications for the future. In J. Weisz & A. Kazdin (Eds.), Evidence-based psychotherapies for children and adolescents (2nd ed., pp. 435-450). New York, NY: Guilford Press.
Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19, 531-540. http://dx.doi.org/10.1177/1049731509335549
Fixsen, D. L., Blase, K. A., Timbers, G. D., & Wolf, M. M. (2001). In search of program implementation: 792 replications of the teaching-family model. In G. A. Bernfeld, D. P. Farrington & A. W. Leschied (Eds.), Offender rehabilitation in practice: Implementing and evaluating effective programs (pp. 149-166). London, England: Wiley.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network.
Fox, J., & Gershman, J. (2000). The World Bank and social capital: Lessons from ten rural development projects in the Philippines and Mexico. Policy Sciences, 33, 399-419. http://dx.doi.org/10.1023/A:1004897409300
Glennan, T. K., Jr., Bodilly, S. J., Galegher, J. R., & Kerr, K. A. (2004). Expanding the reach of education reforms. Santa Monica, CA: RAND.
Green, T. (1980). Predicting the behavior of the educational system. Syracuse, NY: Syracuse University Press.
Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82, 581-629. http://dx.doi.org/10.1111/j.0887-378X.2004.00325.x
Grigg, W. S., Daane, M. C., Jin, Y., & Campbell, J. R. (2003). The nation's report card: Reading 2002. Washington, DC: National Center for Education Statistics. Retrieved from http://nces.ed.gov/nationsreportcard/pdf/main2002/2003521.pdf
Hall, G. E., & Hord, S. M. (2011). Implementing change: Patterns, principles and potholes (3rd ed.). Boston, MA: Allyn & Bacon.
Higgins, M., Weiner, J., & Young, L. (2012). Implementation teams: A new lever for organizational change. Journal of Organizational Behavior. http://dx.doi.org/10.1002/job.1773
Hiss, R. G. (2004). Translational research--Two phases of a continuum. In National Institutes of Health and the Centers for Disease Control and Prevention (Eds.), From clinical trials to community: The science of translating diabetes and obesity research (pp. 11-14). Bethesda, MD: National Institutes of Health.
Individuals with Disabilities Education Act, 20 U.S.C. § 1400 et seq. (2006).
Institute of Education Sciences. (2010). Department of Education Institute of Education Sciences fiscal year 2010 request. Retrieved from http://www2.ed.gov/about/overview/budget/budget10/justifications/y-ies.pdf
Jerald, C. (2005, August). The implementation trap: Helping schools overcome barriers to change (Policy Brief). Washington, DC: The Center for Comprehensive School Reform and Improvement.
Khatri, G. R., & Frieden, T. R. (2002). Rapid DOTS expansion in India. Bulletin of the World Health Organization, 80, 457-463.
Klein, J. A. (2004). True change: How outsiders on the inside get things done in organizations. New York, NY: Jossey-Bass.
Kutner, M., Greenberg, E., Jin, Y., Boyle, B., Hsu, Y., & Dunleavy, E. (2007). Literacy in everyday life: Results from the 2003 National Assessment of Adult Literacy (NCES 2007-480). Washington, DC: National Center for Education Statistics.
Manna, P. (2008, November). Federal aid to elementary and secondary education: Premises, effects, and major lessons learned. Commissioned paper for the Center on Education Policy's Project to Rethink the Federal Role in Elementary and Secondary Education. Retrieved from http://www.cep-dc.org/displayDocument.cfm?DocumentID=332
Marzano, R., Waters, T., & McNulty, B. (2005). School leadership that works: From research to results. Alexandria, VA: Association for Supervision and Curriculum Development.
Morgan, G., & Ramirez, R. (1983). Action learning: A holographic metaphor for guiding social change. Human Relations, 37, 1-28. http://dx.doi.org/10.1177/001872678403700101
Nord, W. R., & Tucker, S. (1987). Implementing routine and radical innovations. Lexington, MA: D. C. Heath and Company.
O'Donoghue, J. (2002). Zimbabwe's AIDS action programme for schools. Evaluation and Program Planning, 25, 387-396. http://dx.doi.org/10.1016/S0149-7189(02)00050-2
Onyett, S., Rees, A., Borrill, C., Shapiro, D., & Boldison, S. (2009). The evaluation of a local whole systems intervention for improved team working and leadership in mental health services. The Innovation Journal: The Public Sector Innovation Journal, 14(1), 1018.
Prochaska, J. M., Prochaska, J. O., & Levesque, D. A. (2001). A transtheoretical approach to changing organizations. Administration and Policy in Mental Health and Mental Health Services Research, 28, 247-261. http://dx.doi.org/10.1023/A:1011155212811
Putnam, R. D. (1993). Making democracy work: Civic traditions in modern Italy. Princeton, NJ: Princeton University Press.
Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169. http://dx.doi.org/10.1007/BF01405730
Rosenheck, R., Neale, M., Leaf, P., Milstein, R., & Frisman, L. (1995). Multisite experimental cost study of intensive psychiatric community care. Schizophrenia Bulletin, 21, 129-140.
Schoenwald, S. K. (2010). From policy pinball to purposeful partnership: The policy contexts of MST transport and dissemination. In J. Weisz & A. Kazdin (Eds.), Evidence-based psychotherapies for children and adolescents (2nd ed., pp. 538-553). New York, NY: Guilford Press.
Schofield, J. (2004). A model of learned implementation. Public Administration, 82, 283-308.
Scott, C. (2008, September). A call to restructure restructuring: Lessons from the No Child Left Behind Act in five states. Washington, DC: Center on Education Policy. Retrieved from http://www.cep-dc.org/displayDocument.cfm?DocumentID=175
Ulrich, W. M. (2002). Legacy systems: Transformation strategies. Upper Saddle River, NJ: Prentice Hall.
U.S. Department of Education. (2008). 30th annual report to Congress on the implementation of the Individuals with Disabilities Education Act, 2008. Washington, DC: Author. Retrieved from http://www2.ed.gov/about/reports/annual/osep2008/parts-b-c/30th-idea-arc.pdf
Vernez, G., Karam, R., Mariano, L. T., & DeMartini, C. (2006). Evaluating comprehensive school reform models at scale: Focus on implementation. Santa Monica, CA: RAND.
Wallace, F., Blase, K., Fixsen, D., & Naoom, S. (2008). Implementing the findings of research: Bridging the gap between knowledge and practice. Washington, DC: Education Research Service.
Washington State Institute for Public Policy. (1998). Watching the bottom line: Cost-effective interventions for reducing crime in Washington. Olympia, WA: Author.
Weiss, I. R., Knapp, M. S., Hollweg, K. S., & Burrill, G. (Eds.). (2002). Investigating the influence of standards: A framework for research in mathematics, science, and technology education. Washington, DC: National Academies Press.
Wensing, M., Wollersheim, H., & Grol, R. (2006). Organizational interventions to implement improvements in patient care: A structured review of reviews. Implementation Science, 1(1), 2. http://dx.doi.org/10.1186/1748-5908-1-2
Address correspondence concerning this article to Dean Fixsen, Frank Porter Graham Child Development Institute, University of North Carolina at Chapel Hill, Campus Box 8040, Chapel Hill, NC 27599 (e-mail: email@example.com).
Preparation of this article was supported, in part, by a grant from the U.S. Department of Education, #H326K080001. However, the contents do not necessarily represent the policy of the U.S. Department of Education, and endorsement by the Federal Government should not be assumed.
ABOUT THE AUTHORS
DEAN FIXSEN (North Carolina CEC), Senior Scientist and Co-Director; KAREN BLASE (North Carolina CEC), Senior Scientist and Co-Director; ALLISON METZ, Associate Director; and MELISSA VAN DYKE, Associate Director, National Implementation Research Network, FPG Child Development Institute, University of North Carolina at Chapel Hill.
FIGURE 4. Protocol for establishing and using a practice-policy communication cycle.

Preparation of the state management team and others
1. The external support group works with the state management team to verify understanding of and agreement to the process and to ensure that information from the practice level will be received in a positive and constructive manner.
2. The external support group works with the regional implementation team and practice-level staff to prioritize issues and concerns: what is most important right now, what produces leverage for other issues, and what is appropriate for state management team decision making (as opposed to decisions by other groups).
3. The external support group and the regional implementation team help practice-level staff decide who should accompany implementation team members to present the issue to the state management team (can speak from experience about the issue and how it relates to implementation of the practice model to benefit students).
4. The external support group and the regional implementation team work with practice-level staff to make sure the message is delivered in a pleasant and constructive manner (fix the problem, not the blame).
5. The external support group assists the regional implementation team in preparing brief documents to be provided to the state management team (in advance of a meeting and with no surprises).

Discussion at the state management team meeting (practice-policy information process)
1. Goal statement: What is the state management team/education system trying to accomplish? How does the current issue relate to that goal? How do the issues connect to policy or directives made by the state management team?
2. Problem statement: Name and describe the barrier/issue that is getting in the way of reaching the goal.
3. Options: What are some options that the practice-level staff/implementation team members have identified (potential solutions)?
4. Option benefits: What are the advantages and disadvantages of each option, from a practice point of view? How could students benefit, or how could education be provided more effectively or efficiently?
5. Offer to help: From the practitioner staff and implementation team members.

State management team response
1. State management team acknowledges the importance of the issue.
2. Question-and-answer period to assure understanding of the issue and options.
3. One person (or more) agrees to take responsibility for developing an action plan for solving the issue and coming back to the state management team for agreement.
4. Follow through with the solution.
5. Check back with the practice staff/implementation team to see if "the solution" resolved the original issue.

Note. Some practice issues brought to the state management team will be facilitators, not barriers. This is a way of keeping "what works" in front of everyone and to avoid inadvertent changes in facilitators.