When I first began researching VAMs, and more specifically the Education Value-Added Assessment System (EVAAS) developed by William Sanders in Tennessee (the state we now know as VAM’s “ground zero”), I came across a fabulous online debate, dating from before blogs like this and other social networking sources were really prevalent, about this very system, then called the TVAAS (the Tennessee Value-Added Assessment System).
The discussants questioning the TVAAS? Renowned scholars including: Gene Glass — best known for his statistical work and for his development of “meta-analysis;” Michael Scriven — best known for his scholarly work in evaluation; Harvey Goldstein — best known for his knowledge of statistical modeling and its use with tests; Sherman Dorn — best known for his work on educational reforms and how we problematize our schools; Gregory Camilli — best known for his studies on the effects of educational programs and policies; and a few others with whom I am less familiar. The discussants defending the TVAAS? William Sanders — the TVAAS/EVAAS developer; Sandra P. Horn — Sanders’s colleague; and an unknown discussant representing the TVAAS (the Tennessee Value-Added Assessment System).
While this was what could now easily be called the first value-added “smack-down” (I am honored to say I was part of the second, and the first so titled), it served as a foundational source for the first study I ever published on the topic of VAMs (a study published in 2008 in the highly esteemed Educational Researcher and titled “Methodological Concerns about the Education Value-Added Assessment System [EVAAS]”). I was reminded just today of this online debate (or debate made available online) that, although it took place in 1995, is still one of the best, if not the best, in-depth debates about, and thorough analyses of, VAMs ever conducted.
While it is long, it is certainly worth a read and review, as readers should see in this debate so many issues that are still relevant and currently problematic, now 20 years later. You can see just how far we’ve really come in the 20 years since this VAM nonsense really got started, as the issues debated here are still, for the most part, the issues that continue to go unresolved…
One of my favorite highlights, which I’ve pasted here in case I have not yet enticed you enough, comes from a post written by Gene Glass on Friday, October 28, 1994. Gene writes:
“Dear Professor Sanders:
I like statistics; I made the better part of my living off of it for many years. But could we set it aside for just a minute while you answer a question or two for me?
I gather that [the TVAAS] is a means of measuring what it is that a particular teacher contributes to the basic skills learning of a class of students. Let me stipulate for the moment that for your sake all of the purely statistical considerations attendant to partialling out previous contributions of other teachers’ “additions of value” to this year’s teachers’ addition of value have been resolved perfectly–above reproach; no statistician who understands mixed models, covariance adjustment, and the like would question them. Let’s just pretend that this is true.
Now imagine–and it should be no strain on one’s imagination to do so–that we have Teacher A and Teacher B and each has had the pretest (September) achievement status of their students impeccably measured. But A has a class with average IQ of 115 and B has a class of average IQ 90. Let’s suppose that A and B teach to the very limit of their abilities all year long and that in the eyes of God, they are equally talented teachers. We would surely expect that A’s students will achieve much more on the posttest (June) than B’s. Anyone would assume so; indeed, we would be shocked if it were not so.
Question: Does your system of measuring and adjusting and assigning numbers to teachers take these circumstances into account so that A and B emerge with equal “added value” ratings?”
Sandra P. Horn’s answer? “Yes.”
Horn had to say yes in response to Gene’s question, however, or the method would even then have been exposed as entirely invalid. Students with higher levels of intelligence undoubtedly learn more than students with lower levels of intelligence, and if two classes differ greatly on IQ, one will make greater progress during the year. That growth may have nothing to do with the teacher, and this can be (and still is) observed despite the sophisticated statistical controls meant to account for students’ prior achievements and, in this case, their aptitudes.