Thus far, four books total have been written about value-added models (VAMs) in education: one (2005) scholarly, edited book published prior to our heightened policy interest in VAMs; one (2012) that is less scholarly and more of a field guide on how to use VAM-based data; my recent (2014) scholarly book; and another recent (2011) scholarly book written by Doug Harris. Doug is an Associate Professor of Economics at Tulane University in Louisiana. He is also, as I’ve written previously, “a ‘cautious’ but quite active proponent of VAMs.”
There is quite an interesting history surrounding these latter two books, given that Harris and I hold quite different views on VAMs and their potentials in education. To read more about our differing opinions, you can read a review of Harris’s book I wrote for Teachers College Record, and another review a former doctoral student and I wrote for Education Review, to which he responded in his (and his book’s) defense, and to which we in turn responded (with a “rebuttal to a rebuttal,” if you will). What was ultimately titled a “Value-Added Smackdown” in a blog post featured in Education Week got, let’s just say, a little out of hand, with the “smackdown” ending up focusing almost solely on our claim that Harris believed, and we disagreed with, the notion that “value-added [was and still is] good enough to be used for [purposes of] educational accountability.” We asserted then, and I continue to assert now, that “value-added is not good enough to be attaching any sort of consequences, much less any such decisions, to its output. Value-added may not be good enough even at the most basic, pragmatic level.”
Harris continues to disagree…
Just this month he released a technical report to his state’s school board (i.e., the Louisiana Board of Elementary and Secondary Education (BESE)), in which he (unfortunately) evidenced that he has not (yet) changed his scholarly stripes, even given the most recent research about the increasingly apparent methodological, statistical, and pragmatic limitations of VAMs (see, for example, here, here, here, here, and here), and the recent position statement released by the American Statistical Association underscoring the key points being evidenced across (most) educational research studies. See also the 24 articles published about VAMs across all American Educational Research Association (AERA) journals here, along with open-access links to the actual articles.
In this report Harris makes “Recommendations to Improve the Louisiana System of Accountability for Teachers, Leaders, Schools, and Districts,” the main one being that the state focus “more on student learning or growth—[by] specifically, calculating the predicted test scores and rewarding schools based on how well students do compared with those predictions.” The recommendations that pertain to our interests here, described below in more detail and with their supporting rationales, include the following five (of six recommendations total):
1. “Focus more on student growth [i.e., value-added] in order to better measure the performance of schools.” Not that there is any research evidence in support, but “The state should [also] aim for a 50-50 split between growth and achievement levels [i.e., not based on value-added].” Doing this at the school accountability level “would also improve alignment with teacher accountability, which includes student growth [i.e., teacher-level value-added] as 50% of the evaluation.”
2. “Reduce uneven incentives and avoid ‘incentive cliffs’ by increasing [school performance score] points more gradually as students move to higher performance levels,” notwithstanding the fact that no research to date has evidenced that such incentives incentivize much of anything intended, at least in education. Regardless, and despite the research, “Giving more weight to achievement growth [will help to create] more even [emphasis added] incentives (see Recommendation #1).”
3. Relatedly, “Create a larger number of school letter grades [to] create incentives for all schools to improve,” by adding +/- extensions to the school letter grades, because “[i]f there were more categories, the next [school letter grade] level would always be within reach…. This way all schools will have an incentive to improve, whereas currently only those who are at the high end of the B-D categories have much incentive.” If only the real world of education worked as informed by simple idioms, like those simplifying the theories supporting incentives (e.g., the carrot dangled just beyond the mule’s reach will make the mule draw the cart harder).
5. “Eliminate the first over-ride provision in the teacher accountability system, which automatically places teachers who are ‘Ineffective’ on either measure in the ‘Ineffective’ performance category.” With this recommendation I fully agree, as Louisiana is one of the most extreme states when it comes to attaching consequences to problematic data, although I don’t think Harris would agree with my “problematic” classification. This change would mean that “teachers who appear highly effective on one measure could not end up in the ‘Ineffective’ category,” which for this state would certainly be a step in the right direction. Harris’s assertion that doing this would also help prevent principals from saving truly ineffective teachers (e.g., by countering teachers’ value-added scores with artificially inflated or allegedly fake observational scores), however, I find insulting on behalf of principals as professionals.
6. “Commission a full-scale third party evaluation of the entire accountability system focused on educator responses and student outcomes.” With this recommendation I also fully agree, under certain conditions: (1) that the external evaluator is indeed external to the system and has no conflicts of interest, including financial ones (even prior to payment for the external review); (2) that what the external evaluator is to investigate is informed by the research, in terms of framing the questions that need to be asked; (3) that, as Harris also recommends, the perspectives of those involved (e.g., principals and teachers) are included in the evaluation design; and (4) that all parties formally agree to release all data regardless of what (positive or negative) the external evaluator might evidence and find.
Harris’s additional details and “other, more modest recommendations” include the following:
- Keep “value-added out of the principal [evaluation] measure,” but “the state should consider calculating principal value-added measures and issuing reports that describe patterns of variation (e.g., variation in performance overall [in] certain kinds of schools) both for the state as a whole and specific districts.” This reminds me of the time when value-added measures for teachers were to be used only for descriptive purposes. While this is noble as a recommendation, we know from history what policymakers can do once such data are made available.
- “Additional Teacher Accountability Recommendations” start on page 11 of this report, although all of these (unfortunately, again) focus on value-added model twists and tweaks (e.g., how to adjust for ceiling effects for schools and teachers with disproportionate numbers of gifted/high-achieving students, and how to watch for and account for bias) to make the teacher value-added model even better.
Harris concludes that “With these changes, Louisiana would have one of the best accountability systems in the country. Rather than weakening accountability, these recommendations [would] make accountability smarter and make it more likely to improve students’ academic performance.” Following these recommendations would “make the state a national leader.” While Harris cites 20 years of failed attempts in Louisiana and across the country as the reason America’s public education system has not improved its public school students’ academic performance, I’d argue it’s more like 40 years of failed attempts, because Harris’s (and so many others’) accountability-bent logic is seriously flawed.
A rebuttal to a rebuttal is called a surrebuttal.