Harvard Economist Deming on VAM-Based Bias


David Deming, an Associate Professor of Education and Economics at Harvard, just published an article in the esteemed American Economic Review about VAM-based bias, in this case when VAMs are used to measure school-level, rather than teacher-level, effects.

Deming appropriately situated his study within the prior works on this topic, including the key works of Thomas Kane (Education and Economics at Harvard) and Raj Chetty (Economics at Harvard). These two, most notably, continue to advance the assertion that using students’ prior test scores and other covariates (i.e., to statistically control for students’ demographic/background factors) minimizes VAM-based bias to negligible levels. Deming also situated his study relative to the notable works of Jesse Rothstein (Public Policy and Economics at the University of California, Berkeley), who continues to provide evidence that VAM-based bias really does exist. The research of these three key players, along with their scholarly disagreements, has also been highlighted in prior posts about VAM-based bias on this blog (see, for example, here and here).

To test for bias in this study, though, Deming used data from Charlotte-Mecklenburg, North Carolina, a district in which students were quasi-randomly assigned to schools (given a school choice initiative). With these data, Deming tested whether VAM-based bias was evident across a variety of common VAM approaches, from the least sophisticated VAM (e.g., one year of prior test scores and no other covariates) to the most sophisticated (e.g., two or more years of prior test score data plus various covariates).
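For readers unfamiliar with what these specifications look like in practice, here is a minimal sketch, in Python, of the two ends of that spectrum: a "thin" VAM with a single prior test score and no other covariates, and a "richer" VAM with two prior scores plus a demographic control. The variable names and simulated data below are my own illustrative assumptions, not Deming's actual data or code.

```python
# Illustrative sketch of two common VAM specifications (not Deming's code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
schools = rng.integers(0, 20, size=n)          # school attended (20 hypothetical schools)
df = pd.DataFrame({
    "prior_score_1": rng.normal(size=n),       # test score one year back
    "prior_score_2": rng.normal(size=n),       # test score two years back
    "frpl": rng.integers(0, 2, size=n),        # demographic covariate (e.g., free/reduced lunch)
    "school": schools,
})
# Simulated "true" school effects plus noise generate the current-year score.
school_effect = rng.normal(scale=0.2, size=20)
df["score"] = (0.6 * df["prior_score_1"] + 0.2 * df["prior_score_2"]
               - 0.3 * df["frpl"] + school_effect[schools]
               + rng.normal(scale=0.5, size=n))

# Least sophisticated VAM: one year of prior scores, no other covariates.
thin = smf.ols("score ~ prior_score_1 + C(school)", data=df).fit()

# More sophisticated VAM: two years of prior scores plus a demographic covariate.
rich = smf.ols("score ~ prior_score_1 + prior_score_2 + frpl + C(school)", data=df).fit()

# The C(school) coefficients are the estimated school value-added effects;
# comparing them across specifications shows how much the richer controls matter.
print(thin.params.filter(like="C(school)").head())
print(rich.params.filter(like="C(school)").head())
```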

Overall, Deming failed to reject the hypothesis that school-level effects as measured using VAMs are unbiased, almost regardless of the VAM being used. In more straightforward terms, Deming found that school effects as measured using VAMs were rarely if ever biased when compared to his randomized samples. Hence, this work falls in line with the prior works countering the claim that bias really does exist (Note: this is a correction from the prior post).
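To make concrete what "failing to reject the hypothesis of unbiasedness" means, below is a hedged sketch of a generic forecast-bias check of the kind used in this literature, not Deming's exact procedure: if VAM-based predictions are unbiased, then regressing the realized outcomes of quasi-randomly assigned students on those predictions should yield a slope statistically indistinguishable from 1. The data are simulated for illustration only.

```python
# Generic forecast-bias check (illustrative assumption, not Deming's exact test).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
true_effect = rng.normal(scale=0.2, size=n)                        # effect of each student's (quasi-)randomly assigned school
vam_prediction = true_effect + rng.normal(scale=0.05, size=n)      # noisy but unbiased VAM-based prediction
realized_score = true_effect + rng.normal(scale=0.5, size=n)       # realized outcome under random assignment

X = sm.add_constant(vam_prediction)
fit = sm.OLS(realized_score, X).fit()
print(fit.params)            # slope should be close to 1 if predictions are unbiased
print(fit.t_test("x1 = 1"))  # failing to reject slope = 1 is "failing to reject unbiasedness"
```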

There are still, however, at least three reasons that could lead to bias in either direction (i.e., positive, overestimating school effects, or negative, underestimating school effects):

  • VAMs may be biased due to the non-random sorting of students into schools (and classrooms) “on unobserved determinants of achievement” (see also the work of Rothstein, here and here).
  • If “true” school effects vary over time (independent of error), then test-based forecasts based on prior cohorts’ test scores (as is common when measuring the difference between predictions and “actual” growth, when calculating value-added) may be poor predictors of future effectiveness.
  • When students self-select into schools, the impact of attending a school may be different for students who self-select in than for students who do not. The same thing likely holds true for classroom assignment practices, although that is my extrapolation, not Deming’s.

In addition, as Deming notes in his overall conclusions, which also pertain here, “many other important outcomes of schooling are not measured here. Schools and teachers [who] are good at increasing student achievement may or may not be effective along other important dimensions” (see also here).

For all of these reasons, “we should be cautious before moving toward policies that hold schools accountable for improving their ‘value added’” given bias.

1 thought on “Harvard Economist Deming on VAM-Based Bias”

  1. “Overall, Deming failed to reject the hypothesis that school-level effects as measured using VAMs are unbiased, almost regardless of the VAM being used. In more straightforward terms, Deming found that school effects as measured using VAMs are often-to-always biased.”

    Actually, your interpretation of Deming’s result is exactly opposite the correct interpretation of his result.

    Deming failed to reject the hypothesis that school-level effects as measured using VAMs are unbiased. In other words, as far as Deming can tell, school-level effects as measured using VAMs *are* unbiased.

    The contribution of this paper to the literature on VAMs is two-fold:

    First, Deming’s subjects were randomized between schools. This is important, because it is expected that hidden factors which might affect student performance would balance out during randomization.

    Second, Deming shows that school-level value-added scores are predictive of how students will perform when they transfer into that school. In other words, not only does value-added work for teachers, but value-added also works for schools.
