Magazine article The Learning Professional

Getting to Impact: Using the Evaluation Standard to Measure Results

Article excerpt

I serve on a local nonprofit foundation board that awards grants to community organizations sponsoring programs to discourage teen pregnancy. Over the years, the foundation has developed research-based rubrics for scoring grant applications. But recently, board members noted that agencies seeking funding don't describe the impact of their programs on the youth they serve.

During a visit to one of the foundation's grant recipients, I was reminded of NSDC's standard on evaluation: Staff development that improves the learning of all students uses multiple sources of information to guide improvement and demonstrate its impact.

The agency I visited has had a program for nine years that focuses on helping high-risk students with personal development, health plans, healthy dating habits, and pregnancy prevention. Staff are genuinely engaged and concerned about young people, but the organization has no real assessment of how much difference it is making. The agency had recently implemented a pre- and post-test, but the results provided only anecdotal evidence of what was occurring as a result of the staff's efforts.

The organization's funding depends on whether it can demonstrate the need for and value of its program, because the foundation carefully scrutinizes and rates applicants to determine where to award grant money. The foundation now asks applicants, "What outcomes do you hope to achieve with this program? How are these outcomes measured?" It wants specific and measurable outcomes, data measuring those stated outcomes, and evidence that agencies have analyzed and improved their efforts based on the data. The foundation looks for strong evidence (e.g., specific numbers) that a program is successful and has achieved its proposed outcomes. In the current environment, schools are undergoing similar scrutiny amid calls for increased accountability. As professional developers, we can lead the march toward effective program evaluation, beginning with our own staff development programs.

Although professional developers are aware of NSDC's Standards for Staff Development (NSDC, 2001), many of us who run professional development programs do not evaluate whether we are getting results: the impact of what teachers are learning on student learning.

Professional development leaders are capable of doing far more evaluation than we do. All programs can be evaluated, although perhaps not all at the same level of sophistication; evaluation is a matter of intentionality. Just a few years ago, many educators were frightened by the idea of examining student data, and central office evaluation staff usually explained the data to school staffs. Now, more educators are proficient in examining student data after learning through professional development what the data mean and how to analyze them. Our next emphasis should be on learning to create effective evaluations using the various data we collect.

Many who run school improvement programs, educational workshops, and professional development activities, like those who run community initiatives, use evaluation terminology but still rely largely on anecdotal evidence. Paper-and-pencil pre- and post-tests are an improvement over having no evaluation at all, but they often fail to determine the extent to which the skills learned were transferred to new practices that lead to improved achievement: the program's effect on the ultimate goal of student learning.

In 2000, after a series of projects led by Joellen Killion, NSDC's director of special projects, NSDC concluded that leaders of most staff development programs fail to plan for and to evaluate the impact of programs on student learning. Killion immersed herself in evaluation theory and research, going beyond educational evaluation to examine cutting-edge evaluation work in the medical and community organization fields.

Killion (2002, p. 132) said almost anyone can begin to evaluate program effectiveness by developing "evaluation think," which she defined as "a frame of reference, a mindset, a set of analytical skills." …
