Two weeks ago I wrote a post about the passage of the “Every Student Succeeds Act” (ESSA), whose primary purpose is to reduce “the federal footprint and restore local control, while empowering parents and education leaders to hold schools accountable for effectively teaching students” within their states. More specific to VAMs, I wrote about how ESSA will allow states to decide how to weight their standardized test scores, and whether and how to evaluate teachers with or without said scores.
Doug Harris, an Associate Professor of Economics at Tulane in Louisiana and, as I’ve written prior, “a ‘cautious’ but quite active proponent of VAMs” (see another two posts about Harris here and here) recently took to an Education Week blog to respond to ESSA, as well. His post titled, “NCLB 3.0 Could Improve the Use of Value-Added and Other Measures” can be read with a subscription. For those of you without a subscription, however, I’ve highlighted some of his key arguments below, along with my responses to those I felt were most pertinent here.
In his post, Harris argues that one element of ESSA “clearly reinforces value-added–the continuation of annual testing.” He treats these two (i.e., value-added and testing) as synonyms, with which I would certainly disagree. The latter precedes the former, and without the latter the former would not exist. In other words, what matters is what one does with the test scores; test scores do not, by default, mean the same thing as VAMs.
In addition, while the continuation of annual testing is written into ESSA, the use of VAMs is not, and their use for teacher evaluations, or not, is now left to each state to decide. Hence, it is true that ESSA “means the states will no longer have to abide by [federal/NCLB] waivers and there is a good chance that some states will scale back aggressive evaluation and accountability [measures].”
Harris discusses the controversies surrounding VAMs, as argued by both VAM proponents and critics, and he contends in the end that “[i]t is difficult to weigh these pros and cons… because we have so little evidence on the [formative] effects of using value-added measures on teaching and learning.” I would argue that we have plenty of evidence on the formative effects of using VAMs, given the evidence dating back now almost thirty years (e.g., from Tennessee). This is one area of research where I do not believe we need more evidence; the existing research already demonstrates the lack of formative effects, or rather of instructionally informative benefits, to be drawn from VAM use. Again, this is largely because VAM estimates are based on tests that are so far removed from the realities of the classroom that neither the tests nor the VAM estimates derived from them are “instructionally sensitive.” Should Harris argue for value-added using “instructionally sensitive” tests, perhaps this suggestion for future research might carry more weight or potential.
Harris also discusses some “other ways to use value-added” should states still decide to do so (e.g., within a flagging system whereby VAMs, as the more “objective” measure, could be used to raise red flags for individual teachers who might require further investigation using other, less “objective” measures). Given the millions in taxpayer revenue it would require to do even this, and again for only the roughly 30% of all teachers who can be VAM-evaluated at all (primarily elementary school teachers of core subject areas), cost should certainly be of note here. This suggestion also overlooks VAMs’ still-prevalent methodological and measurement issues, and how these issues should likely prevent VAMs from playing even a primary role as the key criterion used to flag teachers. That use is also not warranted from an educational measurement standpoint.
Harris continues that “The point is that it would be a shame if we went back to a no-feedback world, and even more of a shame if we did so because of an over-stated connection between evaluation, accountability, and value-added.” In my professional opinion, he is incorrect on both of these points: (1) Teachers are not (really) using the feedback they are getting from their VAM reports, as described prior, and for a variety of reasons including lack of transparency, usability, actionability, etc.; hence, we are nowhere nearer to some utopian “feedback world” than we were pre-VAMs. Indeed, the related research points to observational feedback as the most useful when it comes to garnering and using formative feedback. (2) There is nothing “over-stated” about the “connection between evaluation, accountability, and value-added.” Rather, what he terms possibly “over-stated” is actually real, in practice, and over-used and abused rather than merely stated. This is not just a matter of semantics.
Finally, the one point upon which we agree, with some distinction, is that ESSA “is also an opportunity because it requires schools to move beyond test scores in accountability. This will mean a reduced focus on value-added to student test scores, but that’s OK if the new measures provide a more complete assessment of the value [defined in much broader terms] that schools provide.” ESSA, then, offers the nation “a chance to get value-added [defined in much broader terms] right and correct the problems with how we measure teacher and school performance.” If we define “value-added” in much broader terms, even if test scores are not part of the “value-added” construct, this would likely be a step in the right direction.