Teacher Education Quarterly

Examining the Extremes: High and Low Performance on a Teaching Performance Assessment for Licensure

Article excerpt

In all types of performances, ranging from athletic competitions to theatrical events, even casual observers typically recognize the particularly stellar or poor performers. For trained observers, such as athletic scouts or theater critics, identifying the exceptional performers at both ends of the continuum tends to be the easiest part of the job. Similarly, in assessing the teaching practice of preservice teacher candidates, we expect that observers, particularly trained observers, will readily identify those who are exceptionally effective or ineffective. We anticipate that university supervisors and mentor teachers will agree on who demonstrates extraordinary performance for a preservice candidate and who needs additional preparation before taking on solo classroom teaching responsibilities. We assume that candidates who exhibit outstanding skills in student teaching will excel on a teaching performance assessment and that those who fail the assessment will be those who struggle in student teaching.

Given that both teaching performance assessments and university supervisors' observations include direct evaluation of teaching practice, we anticipate agreement in identifying high and low performers. Identifying weak candidates is particularly critical to ensuring that beginning teachers do not earn licenses until they are competent and ready to teach full time. Both university supervisors' observations and teaching performance assessments aim to evaluate the competency of preservice teacher candidates, and both approaches prompt concerns among teacher educators about their use for licensing decisions. Given the importance of summative judgments about teacher candidates, concerns about the reliability and validity of both approaches are paramount. Researchers find that summative judgments based on student teaching observations fail to differentiate among levels of effectiveness (Arends, 2006a). Similarly, concerns about the reliability and predictive validity of teaching performance assessments need to be resolved (Pecheone & Chung, 2006) before moving to widespread adoption. In addition, both approaches require substantial financial and human resources. In times of funding shortages, questions arise about the need to conduct both performance assessments and supervisor evaluations, particularly if both approaches reach the same conclusion about a candidate's readiness for licensure.

In an earlier study, we explored the extent to which university supervisors' perspectives about candidates' performance corresponded with outcomes from a summative performance assessment (Sandholtz & Shea, 2012). We specifically examined the relationship between supervisors' predictions and teacher candidates' performance on a summative assessment based on a capstone teaching event, part of the Performance Assessment for California Teachers (PACT). We opted to compare predictions and performance for three reasons. First, all of the supervisors were trained scorers of PACT. Because the training, calibrating, predicting, and scoring took place within a 2-week period, the supervisors were in a mind-set that aligned with the PACT ratings of effective teaching. Using the PACT scoring as a basis for determining readiness to teach was fitting for that time period and appropriate for making predictions of performance. Second, supervisors did not use a standard instrument during classroom observations, and they did not all complete observations during the same week. Consequently, using predictions and scores allowed us to make comparisons for a large number of candidates with a single instrument from the same point in time. Third, the process of making predictions added little to the supervisors' workloads yet captured their judgments about candidates' readiness for licensure at that point in the year.

In contrast to expectations, we found that university supervisors' predictions of their candidates' performance did not closely match the PACT scores and that inaccurate predictions were split between over- and underpredictions (for complete findings, see Sandholtz & Shea, 2012). …
