A college-based teacher preparation program undertook a grant-supported evaluation of its curriculum focused on preparing teacher candidates to integrate technology into instruction. The project employed a pre-posttest design, including the use of the 54-item Educator's Knowledge and Implementation of Technology (EKIT) instrument, which provides information regarding technology-related capabilities summarized in five area subscores and a total score. Results of the study substantiated the usefulness of such instrumentation in a pre-posttest design for evaluating program impact on students, and for prioritizing areas for continuous program improvement based on low achievement and unsustained growth. Conclusions establish the power of the pre-posttest design for the evaluation and continuous improvement of teacher training programs.
Educators struggle with two demands that cause them to lose sleep (Blasik, 2002a, 2002b; Lewis, 2002; Shaha, 2002). First is the need to prove that their programs are effective by validating them based on results; second is the need for continuous program improvement. Sources of educational funding come with program validation requirements focused on tangible student outcomes. In an age of increasing accountability, program funding and continuity are accompanied by expectations of proof that offerings benefit student learning and meet objectives and requirements.
The second demand centers on the need to continuously improve instruction and the impacts it achieves. Continuous improvement reflects a clear focus on identifying where things need to improve, followed by a systematic approach to implementing program changes designed to remediate prioritized weaknesses and thereafter measure impact (Arcaro, 1995; Boulmetis, 2000; Brown, 2001; Ross, 1993). Ongoing program improvement is integral to any project with an evaluation component or that seeks to achieve lasting success (Quinones & Kirshstein, 1998; Smith, 2002). The overall objective is to achieve ever-higher levels in tangible measures of educational success.
Assessment, well designed and executed, helps educators resolve the demands for validation and continuous improvement (cf. Baldrige, 2002; Brown, 2001; Walton, 1986). Through appropriate assessment approaches, educators can make goals and objectives tangible and evaluate whether, and to what degree, they have been achieved (Daniels, 2002; Shaha, 1997; Smith, 2002). Educators can then identify areas of success and strength, and isolate and prioritize areas for improvement. Clearly, validation and improvement are best achieved when outcomes and desired results are clearly identified, translated into plans, and then converted into instruments designed to gather the requisite information regarding success (Arcaro, 1995; Baldrige, 2002; Quinones, 1998; Stevens, 2001). To show gains in performance and program improvement, organizations must measure impacts and outcomes, and critically examine the results to achieve excellence (Blasik, 2002a, 2002b; Daniels, 2002; Shaha, 1997).
Increases in accountability and the demand for continuous improvement have also affected programs focused on preparing teachers to better use technology and incorporate it into instruction. In 1999, Utah Valley State College (UVSC) received grant funding from the U.S. Department of Education through Preparing Tomorrow's Teachers to Use Technology (PT3). The application required a substantial evaluation component. UVSC's evaluation plan included a quantitative pre-posttest design utilizing a number of assessment tools to measure the impacts of technology instruction on teacher candidates. Findings from these assessments were used to determine what was learned and retained, and then to identify and implement program improvements focused on enhancing learning outcomes. The assessments were designed, and their results leveraged, to improve the teacher education program (Farnsworth, 2002). …
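The core analytic move in such a pre-posttest design is simple: pair each candidate's pre and post scores on a subscale, compute the mean gain, and treat the lowest-gain areas as priorities for improvement. The sketch below illustrates this under stated assumptions: the subscale names and scores are hypothetical and do not come from the UVSC study or the EKIT instrument.

```python
# Minimal sketch of a pre-posttest gain analysis like the one described
# above. All subscale names and scores are hypothetical illustrations.
from statistics import mean

# Paired pre/post subscale scores for a small hypothetical cohort;
# list position i holds the same participant's pre and post scores.
scores = {
    "Basic Operations":     {"pre": [12, 10, 14], "post": [16, 15, 18]},
    "Instructional Design": {"pre": [8, 9, 7],    "post": [10, 10, 9]},
    "Assessment with Tech": {"pre": [11, 13, 12], "post": [12, 13, 13]},
}

def mean_gain(subscale: str) -> float:
    """Average post-minus-pre gain across paired participants."""
    pre = scores[subscale]["pre"]
    post = scores[subscale]["post"]
    return mean(b - a for a, b in zip(pre, post))

gains = {name: mean_gain(name) for name in scores}

# The lowest mean gain flags the first priority for program improvement.
priority = min(gains, key=gains.get)
```

In a real evaluation the gain would typically also be tested for statistical significance (e.g., a paired t-test), but the prioritization logic, ranking subscales by unsustained or low growth, is as shown.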