I just came across this three-minute video that you all should find of interest (click here for a direct link to the video on YouTube; click here to view the video’s original posting on Stanford’s Center for Opportunity Policy in Education (SCOPE)).
Featured is Stanford Professor Emeritus Dr. Edward Haertel, describing what he sees as two major flaws in the use of VAMs (value-added models) for teacher evaluation and accountability. These flaws, he argues, are serious enough that VAM scores should not be used to make high-stakes decisions about any of America’s public school teachers: “Like all measurements, these scores are imperfect. They are appropriate and useful for some purposes, but not for others. Viewed from a measurement perspective, value-added scores have limitations that make them unsuitable for high-stakes personnel decisions.”
The first problem is the unreliability of VAM scores, which stems from noise in the data. A teacher’s effect on student test scores is real but small relative to the many other factors that influence those scores, so separating the teacher’s contribution from everything else is very difficult. This is not a flaw that more sophisticated statistical models can fix; it is inherent in the data themselves.
The second problem is bias that the models fail to account for: the difference in circumstances faced by a teacher in a strong school versus a teacher in a high-needs school. A student’s instructional history includes out-of-school support, peer support, and the academic learning climate of the school, and VAMs do not take these important factors into account.
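The bias point can be sketched the same way (again with made-up numbers, as an illustration rather than anything from the video): when an unmeasured school-context factor affects score gains, a naive value-added estimate attributes it to the teacher, so two equally effective teachers receive different scores:

```python
import random
import statistics

random.seed(1)

CLASS_SIZE = 25

def class_gain(teacher_effect, school_context):
    """Average score gain: teacher effect plus an unmeasured school-context effect."""
    return statistics.mean(
        teacher_effect + school_context + random.gauss(0, 1)
        for _ in range(CLASS_SIZE)
    )

# Two equally effective teachers (identical true effect) in different schools.
true_effect = 0.0
strong_school = 0.5   # hypothetical boost: climate, peer support, outside help
high_needs = -0.5     # hypothetical drag from lack of the same supports

# Average over many simulated classes to show the systematic gap.
est_strong = statistics.mean(class_gain(true_effect, strong_school)
                             for _ in range(200))
est_needs = statistics.mean(class_gain(true_effect, high_needs)
                            for _ in range(200))
print(f"estimated 'value added', strong school:     {est_strong:+.2f}")
print(f"estimated 'value added', high-needs school: {est_needs:+.2f}")
```

Unlike the noise problem, this gap does not shrink with more data: it is systematic, and it persists as long as the contextual factors stay unmeasured.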