Massachusetts Also Moving To Remove Growth Measures from State’s Teacher Evaluation Systems


Since the passage of the Every Student Succeeds Act (ESSA) in December 2015, in which the federal government handed back to states the authority to decide whether to evaluate teachers with or without students’ test scores, states have been dropping the value-added measure (VAM) or growth components (e.g., the Student Growth Percentiles (SGP) package) of their teacher evaluation systems, as formerly required by President Obama’s Race to the Top initiative. See my most recent post here, for example, about how legislators in Oklahoma recently removed VAMs from their state-level teacher evaluation system, while simultaneously increasing the state’s focus on the professional development of all teachers. Hawaii recently did the same.

Now, it seems that Massachusetts is next, or is at least moving in this same direction.

As per a recent article in The Boston Globe (here), similar test-based teacher accountability efforts are facing increased opposition, primarily from school district superintendents and teachers throughout the state. At issue is whether all of this is simply “becoming a distraction,” whether the data can be impacted or “biased” by statistically uncontrollable factors, and whether all teachers can be evaluated in similar ways, which is an issue of “fairness.” Also at issue is “reliability”: a 2014 study released by the Center for Educational Assessment at the University of Massachusetts Amherst, in which researchers examined student growth percentiles, found that the “amount of random error was substantial.” Stephen Sireci, a UMass professor and one of the study’s authors, noted that, instead of relying on the volatile results, “You might as well [just] flip a coin.”
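The reliability concern above can be illustrated with a toy simulation (the numbers here are hypothetical, not drawn from the UMass study): if only a small share of a growth-based rating reflects a stable teacher effect and the rest is random error, then a teacher’s rating in one year barely predicts her rating the next year.

```python
import random

def simulate_rating_stability(n_teachers=1000, true_effect_share=0.2, seed=42):
    """Simulate two years of growth-based teacher ratings in which only
    `true_effect_share` of the rating variance reflects a stable teacher
    effect; the rest is random error. Returns the Pearson correlation
    between year-1 and year-2 ratings (an illustrative, hypothetical model)."""
    rng = random.Random(seed)
    true_effect = [rng.gauss(0, 1) for _ in range(n_teachers)]
    w_true = true_effect_share ** 0.5          # weight on the stable component
    w_noise = (1 - true_effect_share) ** 0.5   # weight on the random error
    year1 = [w_true * t + w_noise * rng.gauss(0, 1) for t in true_effect]
    year2 = [w_true * t + w_noise * rng.gauss(0, 1) for t in true_effect]
    # Pearson correlation between the two years of ratings
    n = n_teachers
    m1, m2 = sum(year1) / n, sum(year2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(year1, year2)) / n
    sd1 = (sum((a - m1) ** 2 for a in year1) / n) ** 0.5
    sd2 = (sum((b - m2) ** 2 for b in year2) / n) ** 0.5
    return cov / (sd1 * sd2)
```

With a 20% stable-effect share, the year-to-year correlation lands near 0.2, i.e., last year’s “effective” teachers are scarcely more likely than chance to be rated “effective” again, which is the substance of the coin-flip critique.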

Damian Betebenner, a senior associate at the National Center for the Improvement of Educational Assessment Inc. in Dover, N.H., who developed the SGP model in use in Massachusetts, added that, “Unfortunately, the use of student percentiles has turned into a debate for scapegoating teachers for the ills.” Isn’t this the truth: policymakers got hold of these statistical tools, after which they much too swiftly and carelessly singled teachers out for unmerited treatment and blame.

Regardless, stakeholders in Massachusetts recently lobbied the Senate to approve an amendment to the budget that would no longer require such test-based ratings in teachers’ professional evaluations, while also passing a policy statement urging the state to scrap these ratings entirely. “It remains unclear what the fate of the Senate amendment will be,” however. “The House has previously rejected a similar amendment, which means the issue would have to be resolved in a conference committee as the two sides reconcile their budget proposals in the coming weeks.”

Not surprisingly, Mitchell Chester, Massachusetts Commissioner of Elementary and Secondary Education, continues to defend the requirement. It seems that Chester, like others, is still holding tight to the default (yet still unsubstantiated) logic that helped advance these systems in the first place, arguing, “Some teachers are strong, others are not…If we are not looking at who is getting strong gains and those who are not we are missing an opportunity to upgrade teaching across the system.”

3 thoughts on “Massachusetts Also Moving To Remove Growth Measures from State’s Teacher Evaluation Systems”

  1. Chester et al. have recent grants from Bill Gates that may help to explain why VAM stays in place. The architecture of the accountability system would have to be modified, and there is not a simple replacement suitable for all teachers in all grades. Note also the letter to the editor of Educational Researcher about VAM and the wrong statement that VAM is OK if eight conditions are met. Steven Klees asserts VAMs are never “Accurate, Reliable, Valid.”
    Why couldn’t AERA say that?

  2. While MA takes growth measures out of teacher evaluation, it is making them a central component of education provider evaluations. Programs will be judged on the SGPs of the students of first-year teachers who complete their programs. This is part of a ramping up of preservice requirements, supposedly to improve the preparation of first-year teachers, the goal being that they will perform as well as third-year teachers, especially in schools with high numbers of low-income students, where many first-year teachers are employed.

    Meanwhile, the same MA political forces are working hard to open more charter schools full of low-income kids, where teachers need not have a single day of preservice education and training. They don’t even have to pass the educator tests, the MTELs, until the end of their first year. Cognitive dissonance, anyone?
