Susan Moore Johnson – Professor of Education at Harvard University and author of another important article regarding how value-added models (VAMs) often reinforce the walls of “egg-crate” schools (here) – recently published, along with two co-authors, an article in the esteemed, peer-reviewed Educational Evaluation and Policy Analysis. The article, titled “Investing in Development: Six High-Performing, High-Poverty Schools Implement the Massachusetts Teacher Evaluation Policy,” can be downloaded here (in its free, pre-publication form).
In this piece, as taken from the abstract, they “studied how six high-performing, high-poverty [and traditional, charter, under state supervision] schools in one large Massachusetts city implemented the state’s new teacher evaluation policy” (p. 383). They aimed to learn how these “successful” schools, with “success” defined by each school’s state accountability ranking along with its “public reputation,” approached the state’s teacher evaluation system and its components (e.g., classroom observations, follow-up feedback, and the construction and treatment of teachers’ summative evaluation ratings). They also investigated how educators within these schools “interacted to shape the character and impact of [the state’s] evaluation” (p. 384).
Akin to Moore Johnson’s aforementioned work, she and her colleagues argue that “to understand whether and how new teacher evaluation policies affect teachers and their work, we must investigate [the] day-to-day responses [of] those within the schools” (p. 384). Hence, they explored “how the educators in these schools interpreted and acted on the new state policy’s opportunities and requirements and, overall, whether they used evaluation to promote greater accountability, more opportunities for development, or both” (p. 384).
They found that “despite important differences among the six successful schools [they] studied (e.g., size, curriculum and pedagogy, student discipline codes), administrators responded to the state evaluation policy in remarkably similar ways, giving priority to the goal of development over accountability [emphasis added]” (p. 385). In addition, “[m]ost schools not only complied with the new regulations of the law but also went beyond them to provide teachers with more frequent observations, feedback, and support than the policy required. Teachers widely corroborated their principal’s reports that evaluation in their school was meant to improve their performance and they strongly endorsed that priority” (p. 385).
Overall, they concluded that “an evaluation policy focusing on teachers’ development can be effectively implemented in ways that serve the interests of schools, students, and teachers” (p. 402). This is especially true when (1) evaluation efforts are “well grounded in the observations, feedback, and support of a formative evaluation process;” (2) states rely on “capacity building in addition to mandates to promote effective implementation;” and (3) schools also benefit from spillover effects from other positive state-level policies (i.e., when states do not take draconian approaches to other educational policies), which in these cases included policies permitting district discretion and control over staffing and administrative support (p. 402).
Relatedly, such developmental and formatively focused teacher evaluation systems can work, they also conclude, when schools are led by highly effective principals who are free to select high-quality teachers. Their findings suggest that this “is probably the most important thing district officials can do to ensure that teacher evaluation will be a constructive, productive process” (p. 403). In sum, “as this study makes clear, policies that are intended to improve schooling depend on both administrators and teachers for their effective implementation” (p. 403).
Please note, however, that this study was conducted before districts in this state were required to incorporate standardized test scores to measure teachers’ effects (e.g., using VAMs); hence, the assertions and conclusions that the authors set forth throughout this piece should be read with that important caveat in mind. Perhaps the findings should matter even more, in that they offer at least some proof that teacher evaluation works IF used for developmental and formative (versus, or perhaps in lieu of, summative) purposes.
Citation: Reinhorn, S. K., Moore Johnson, S., & Simon, N. S. (2017). Investing in development: Six high-performing, high-poverty schools implement the Massachusetts teacher evaluation policy. Educational Evaluation and Policy Analysis, 39(3), 383–406. doi:10.3102/0162373717690605 Retrieved from https://projectngt.gse.harvard.edu/files/gse-projectngt/files/eval_041916_unblinded.pdf
When will there be a study to determine whether teachers trust the content-specific pedagogical knowledge of the people who evaluate them? In theory, a peer review system might work, but managing one requires freeing some teachers from classroom obligations. Perhaps because I work in the arts, I see the hazards in delegating evaluations to principals and to observation checklists that are too often keyed to strictly academic, sage-on-the-stage instruction. I see little attention to evaluation appropriate for an environment where studio instruction is central, diverse outcomes from projects are valued, and “mastery” of short-term, easy-to-identify objectives is less important than most evaluation checklists make it appear. This study also begins with a limited concept of a “high-performing” school, and it is insufficiently critical of the MET studies.
There are so many problems with the VAM score and its use that it is difficult to know where to begin. The critical problem stems from the fact that jurisdictions use a predictive model to predict student outcomes, then base teacher evaluations on the differences between predicted and actual outcomes. When building a model, one goal is to minimize residuals; in other words, building the model focuses on increasing its accuracy. So when residuals grow too large, the correct interpretation is not that the error is due to teacher performance but that the model must be recalibrated. Assuming that the model’s error is the result of teacher performance is a false assumption.
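To make the commenter’s point concrete, here is a minimal simulated sketch (in Python, using only numpy; the data and model are hypothetical, not any jurisdiction’s actual VAM). A deliberately misspecified linear predictor is fit to simulated scores in which teachers contribute nothing at all; the per-teacher average residual, which a VAM would report as a “teacher effect,” still varies from teacher to teacher, showing that residuals bundle model error and noise together with any true teacher contribution.

```python
import numpy as np

# Hypothetical illustration: a VAM-style "teacher effect" is the average
# residual of a predictive model, so it absorbs model error along with
# any real teacher contribution. All data below are simulated.
rng = np.random.default_rng(42)

n_students, n_teachers = 1000, 25
teacher = rng.integers(0, n_teachers, size=n_students)

prior = rng.normal(50, 10, size=n_students)
# The true data-generating process is mildly nonlinear, and teachers
# contribute nothing: there is no teacher term in this equation.
actual = 10 + 0.8 * prior + 0.02 * prior**2 + rng.normal(0, 5, n_students)

# Fit the usual linear value-added predictor: current score ~ prior score.
slope, intercept = np.polyfit(prior, actual, 1)
predicted = intercept + slope * prior
residuals = actual - predicted

# Per-teacher mean residual: the quantity a VAM labels "teacher effect."
vam_score = np.array([residuals[teacher == t].mean()
                      for t in range(n_teachers)])

# By construction every nonzero vam_score here is misspecification
# plus sampling noise, not teacher performance.
print("Spread of 'teacher effects' when the true teacher effect is zero:")
print(f"  min = {vam_score.min():.2f}, max = {vam_score.max():.2f}")
```

Running this yields a visible spread of “teacher effects” even though no teacher influenced any score, which is exactly the recalibration problem the comment describes: the residual cannot, by itself, distinguish a struggling teacher from a mis-fit model.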