Special Issue of “Educational Researcher” Examines Value-Added Measures (Paper #1 of 9)


A few months ago, the flagship journal of the American Educational Research Association (AERA), the peer-reviewed journal titled Educational Researcher (ER), published a “Special Issue” including nine articles examining value-added measures (VAMs): one introduction (reviewed below), four feature articles, one essay, and three commentaries. I will review each of these pieces separately over the next few weeks, although if any of you want an advance preview, do click here, as AERA made each of these articles free and accessible.

In this “Special Issue” editors Douglas Harris – Associate Professor of Economics at Tulane University – and Carolyn Herrington – Professor of Educational Leadership and Policy at Florida State University – solicited “[a]rticles from leading scholars cover[ing] a range of topics, from challenges in the design and implementation of teacher evaluation systems, to the emerging use of teacher observation information by principals as an alternative to VAM data in making teacher staffing decisions.” They challenged authors “to participate in the important conversation about value-added by providing rigorous evidence, noting that successful policy implementation and design are the product of evaluation and adaption” (assuming “successful policy implementation and design” exist, but I digress).

More specifically, in the co-editors’ Introduction to the Special Issue, Harris and Herrington note that in this special issue they “pose dozens of unanswered questions [see below], not only about the net effects of these policies on measurable student outcomes, but about the numerous, often indirect ways in which [unintended] and less easily observed effects might arise.” This section offers, in my opinion, the most “added value.”

Here are some of their key assertions:

  • “[T]eachers and principals trust classroom observations more than value added.”
  • “Teachers—especially the better ones—want to know what exactly they are doing well and doing poorly. In this respect, value-added measures are unhelpful.”
  • “[D]istrust in value-added measures may be partly due to [or confounded with] frustration with high-stakes testing generally.”
  • “Support for value added also appears stronger among administrators than teachers…But principals are still somewhat skeptical.”
  • “[T]he [pre-VAM] data collection process may unintentionally reduce the validity and credibility of value-added measures.”
  • “[I]t seems likely that support for value added among educators will decrease as the stakes increase.”
  • “[V]alue-added measures suffer from much higher missing data rates than classroom observation[s].”
  • “[T]he timing of value-added measures—that they arrive only once a year and during the middle of the school year when it is hard to adjust teaching assignments—is a real concern among teachers and principals alike.”
  • “[W]e cannot lose sight of the ample evidence against the traditional model [i.e., based on snapshot measures examined once per year, as was done for decades past, or pre-VAM].” This does not make VAMs “better,” but most researchers agree with this statement.

Inversely, here are some points or assertions that should cause pause:

  • “The issue is not whether value-added measures are valid but whether they can be used in a way that improves teaching and learning.” I would strongly argue that validity is a pre-condition to use, as we do not want educators using invalid data to even attempt to improve teaching and learning. I’m actually surprised this statement was published, as it is so scientifically and pragmatically off base.
  • We know “very little” about how educators “actually respond to policies that use value-added measures.” Clearly, the co-editors are not followers of this blog, other similar outlets (e.g., The Washington Post’s Answer Sheet), or other articles published in the media as well as in scholarly journals about educator use, interpretation, opinion, response, and the like regarding VAMs. (For examples of articles published in scholarly journals, see here, here, and here.)
  • “[I]n the debate about these policies, perspective has taken over where the evidence trail ends.” Rather, the evidence trail is already quite saturated in many respects, as study after study continues to evidence the same things (e.g., inconsistencies in teacher-level ratings over time, mediocre correlations between VAM and observational output, all of which matter most if high-stakes decisions are to be tied to value-added output).
  • “[T]he best available evidence [is just] beginning to emerge on the use of value added.” Perhaps the authors of this piece are correct if focusing only on use, or the lack thereof, as we have a good deal of evidence that much use is not happening, given issues with transparency, accessibility, comprehensibility, relevance, fairness, and the like.

In the end, and also of high value, the authors of this piece offer others (e.g., graduate students, practitioners, or academics looking to take note) some interesting points for future research, given VAMs are likely here to stay, at least for a while. Some, although not all, of their suggested questions for future research are included here:

  • How do educators’ perceptions impact their behavioral responses to VAMs or VAM output? Does administrator skepticism affect how they use these measures?
  • Does the use of VAMs actually lead to more teaching to the test and shallow instruction, aimed more at developing basic skills than at critical thinking and creative problem solving?
  • Does the approach further narrow the curriculum and lead to more gaming of the system or, following Campbell’s law, distort the measures in ways that make them less informative?
  • In the process of sanctioning and dismissing low-performing teachers, do value-added-based accountability policies sap the creativity and motivation of our best teachers?
  • Does the use of value-added measures reduce trust and undermine collaboration among educators and principals, thus weakening the organization as a whole?
  • Are these unintended effects made worse by the common misperception that when teachers help their colleagues within the school, they could reduce their own value-added measures?
  • Aside from these incentives, does the general orientation toward individual performance lead teachers to think less about their colleagues and organizational roles and responsibilities?
  • Are teacher and administrator preparation programs helping to prepare future educators for value-added-based accountability by explaining the measures and their potential uses and misuses?
  • Although not explicitly posed as a question in this piece, one important addition: what are the benefits of VAMs and VAM outputs, intended or otherwise?

Some of these questions are discussed or answered at least in part in the eight articles included in this “Special Issue” also to be highlighted in the next few weeks or so, one by one. Do stay tuned.

*****

Article #1 Reference: Harris, D. N., & Herrington, C. D. (2015). Editors’ introduction: The use of teacher value-added measures in schools: New evidence, unanswered questions, and future prospects. Educational Researcher, 44(2), 71-76. doi:10.3102/0013189X15576142

