Diane Ravitch recently posted “New York’s Teacher of the Year Is Not Rated ‘Highly Effective’” on her blog. In the post she writes about Kathleen Ferguson, New York state’s Teacher of the Year, who has also received numerous excellence-in-teaching awards, including her district’s teacher of the year award.
Ms. Ferguson, despite the clear consensus that she is an excellent teacher, was not rated “highly effective” on her evaluation, largely because she teaches a disproportionate number of special education students in her classroom. She recently testified before a Senate Education Committee, on her own and other teachers’ behalf, about these rankings, their (over)reliance on student test scores to define “effectiveness,” and the Common Core. To see the original report and to hear a short piece of her testimony, please click here. “This system [simply] does not make sense,” Ferguson said.
In terms of VAMs, this presents evidence of problems with what is called “concurrent-related evidence of validity.” When gathering concurrent-related evidence of validity (see the full definition on the Glossary page of this site), it is necessary to assess, for example, whether teachers who post large or small value-added gains or losses over time are the same teachers deemed effective or ineffective, respectively, over the same period of time using other independent quantitative and qualitative measures of teacher effectiveness (e.g., external teaching excellence awards, as in the case here). If the evidence points in the same direction, concurrent validity is supported, which strengthens the overall argument for “validity.” Inversely, if the evidence does not point in the same direction, concurrent validity is not supported, which weakens the overall argument for “validity.”
Only when similar sources of evidence support similar inferences and conclusions can our confidence in the sources as independent measures of the same construct (i.e., teaching effectiveness) increase.
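To make the idea concrete: checking concurrent-related evidence of validity amounts to asking whether two independent rankings of the same teachers agree. Below is a minimal, purely hypothetical sketch (all numbers are invented, and the function names are mine, not from any evaluation system) that compares value-added estimates against an independent effectiveness measure using a Spearman rank correlation, where values near 1 would suggest agreement and values near 0 or below would suggest the two measures are not capturing the same construct:

```python
# Hypothetical illustration only: "vam_scores" stands in for teachers'
# value-added estimates and "ratings" for an independent effectiveness
# measure (e.g., rankings based on observations or awards). All invented.

def ranks(values):
    """Return the rank of each value (1 = smallest), averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # group tied values so they share an average rank
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

vam_scores = [0.8, -0.3, 1.2, 0.1, -0.9]  # invented value-added estimates
ratings = [4, 2, 5, 3, 1]                 # invented independent ratings
print(round(spearman(vam_scores, ratings), 2))  # prints 1.0 (perfect agreement)
```

In the cases discussed here, of course, the whole point is that the two measures disagreed: award-winning teachers posted middling or low value-added scores, which is exactly the pattern that undermines concurrent validity.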
This is also highlighted in research I have conducted in Houston. Please watch this video (about 12 minutes) to investigate and understand “the validity issue” further, through the eyes of four teachers with similar stories who were terminated because of value-added scores that, for the most part, contradicted other indicators of their effectiveness as teachers. You can also read the original study here.
Interesting video. As an arts educator, I am always uneasy with the classifying of different types of teachers: eligible or ineligible, “Group A” or “Group B” as it is in Arizona. No matter where you go, approximately 30% of the teachers are responsible for 100% of the academic accountability, and in many states that small percentage of teachers also influences the effectiveness ratings of many other teachers at a school. What better way to reform public education than to further silo teachers, by subject and now by professional group, as a means to grow students? In Arizona, if you are a “Group B” teacher, you had better hope you have strong “Group A” teachers on your campus, because at some levels your effectiveness ranking depends on the aggregate “A” teacher scores.