Academic journal article Teacher Education Quarterly

Predictions and Performance on the PACT Teaching Event: Case Studies of High and Low Performers

Article excerpt

Performance assessments are becoming an increasingly common strategy for evaluating the competency of pre-service teachers. In connection with standards-based reform and a focus on teacher quality, many states have moved away from relying on traditional tests or university supervisors' observations and have developed performance assessments as part of licensing requirements or accreditation of teacher education programs (Pecheone, Pigg, Chung, & Souviney, 2005). In California, legislation requires that teacher certification programs implement a performance assessment to evaluate candidates' mastery of specified teaching performance expectations (California Commission on Teacher Credentialing, 2006). The concerns about traditional tests used in licensing decisions center on the extent to which the tests are authentic and valid in identifying effective teaching (Mitchell, Robinson, Plake, & Knowles, 2001). Relying on university supervisors' classroom observations of candidates for summative judgments is also problematic due to issues of validity and reliability. A potential advantage of university supervisor assessments is that judgments are based on observation of candidates' actual teaching in classroom settings. However, observations may be conducted too infrequently, training of supervisors may be insufficient to achieve inter-rater agreement, and observation forms may not be tailored to specific disciplines or levels (Arends, 2006b). Researchers who investigated how teacher candidates were evaluated in student teaching across multiple types of teacher preparation institutions in the U.S. reported that summative judgments made from student teaching observation forms were unable to differentiate among various levels of effectiveness (Arends, 2006a).

When performance assessments include evidence from teaching practice, they can provide more direct evaluation of teaching ability than pencil-and-paper licensure tests or completion of coursework (Mitchell et al., 2001; Pecheone & Chung, 2006; Porter, Youngs, & Odden, 2001). But, as with traditional tests and supervisor observations, concerns about the reliability and predictive validity of performance assessments must be resolved (Pecheone & Chung, 2006). Other concerns about performance assessments center on the effects on curriculum and the richness of teacher education programs, potential harm to relationships essential for learning, competing demands, and the significant amount of human and financial resources required (Arends, 2006a; Delandshere & Arens, 2001; Snyder, 2009; Zeichner, 2003). A particularly pressing issue is the high cost of developing and implementing performance assessments during periods of funding shortages (Guaglianone, Payne, Kinsey, & Chiero, 2009; Porter, Youngs, & Odden, 2001). The ongoing costs, in terms of financial support and faculty time, lead teacher educators to question if resources could be better spent in other ways (Snyder, 2009). If performance assessments provide little information beyond what university supervisors gain through formative evaluations and classroom observations of candidates, then the high costs, in combination with other concerns, may seem less justifiable.

In an earlier study, my co-author and I explored the extent to which supervisors' perspectives about candidates' performance corresponded with outcomes from a summative performance assessment (Sandholtz & Shea, 2012). We specifically examined the relationship between university supervisors' predictions and candidates' performance on the Performance Assessment for California Teachers (PACT) teaching event. We found that university supervisors' predictions of their candidates' performance did not closely match the PACT scores and that inaccurate predictions were split between over- and under-predictions. In addition, supervisors did not provide more accurate predictions for high and low performers than other candidates. …
