A VAMboozled! follower – Terry Ward (El Paso, TX), a retired writer and statistician married to a veteran and current Title I music teacher – sent this to me via email after reading a recent article in The New York Times. The article, released around Thanksgiving, was about how the “U.S. Wants Teacher Training Programs to Track How [College of Education] Graduates’ Students Perform.” This is to be done, of course, using in part value-added models (VAMs).
I thought it important to share with you all, with his permission, what Terry wrote in response:
It has recently been proposed that colleges of education be rated and evaluated on the basis of how the students of their graduates perform on standardized tests. As they say, however, the devil is in the details. Let’s look at how this might work in the case of my wife, a teacher with some 40+ years of experience who graduated from a college of education over 40 years ago.
Problem 1: Standardized student scores are problematic enough for individual teachers. Remember, the American Statistical Association (ASA) estimates that teacher influence explains somewhere between one and fourteen percent of the variation in test scores. The college is even further removed from the student taking the test, so the question becomes: how small must the college’s contribution be?
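A back-of-the-envelope sketch makes the scale concrete. This is my own illustration, not anything from the ASA statement or the NYT article; the fractions of the teacher effect assigned to the college are purely hypothetical assumptions:

```python
# A minimal sketch of the variance arithmetic. The 1%-14% range (share of
# test-score variance attributable to teachers) is the ASA's figure; the
# fractions of the teacher effect assigned to the college below are
# hypothetical, chosen only to show the scale involved.

teacher_share_low, teacher_share_high = 0.01, 0.14  # ASA's reported range

for college_fraction in (0.50, 0.25, 0.10):  # assumed, generous fractions
    low = teacher_share_low * college_fraction
    high = teacher_share_high * college_fraction
    print(f"If the college drives {college_fraction:.0%} of the teacher "
          f"effect, it explains {low:.2%}-{high:.2%} of score variance.")
```

Even under the most generous assumption here, the college’s share of score variance tops out at 7 percent, and plausibly sits well below one percent.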
Problem 2: What is the decay function for college influence? Simply put, my wife graduated with her initial teaching degree over forty years ago. Any influence of the college upon her teaching is, therefore, minimal to non-existent. One assumes such an influence fades over time, so what is the shape of this decay, and how will U.S. Department of Education (DOE) evaluators measure it? What is the half-life of educational influence?
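To make the half-life question concrete, here is a minimal sketch assuming a simple exponential decay; both the model and the half-life values are hypothetical, since (as far as I know) no one has ever measured such a quantity:

```python
# Under exponential decay, the fraction of college influence remaining
# after t years with half-life H is 0.5 ** (t / H). The model and the
# half-life values below are assumptions for illustration only.

def influence_remaining(years: float, half_life: float) -> float:
    """Fraction of the original college influence left after `years`."""
    return 0.5 ** (years / half_life)

for half_life in (5.0, 10.0, 20.0):  # hypothetical half-lives, in years
    remaining = influence_remaining(40, half_life)
    print(f"half-life {half_life:g} yrs -> {remaining:.1%} left after 40 yrs")
```

Even granting a generous 20-year half-life, only a quarter of the original influence would remain after 40 years; the DOE would be trying to measure a signal that the model itself says has mostly vanished.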
Problem 3: If we assume that the easiest year in which to measure college influence is the first year of teaching, how might the DOE separate college-of-education factors from the ordinary struggles of first-year inexperience in real-world teaching?
Problem 4: What happens when additional schooling is factored in? My wife has a Master’s degree. Does that influence her teaching, and how is the DOE evaluation to split credit between her very distant B.A. (40 years ago) and her slightly more recent Master’s degree (30 years ago)?
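Under the same assumed decay model from Problem 2, one could at least compute the relative weights the two degrees would carry; the point of the sketch is that the proposal specifies no such rule:

```python
# Relative weights of the two credentials under the assumed exponential
# decay from the Problem 2 sketch (half-life and model are illustrative).

def influence_remaining(years: float, half_life: float) -> float:
    return 0.5 ** (years / half_life)

half_life = 10.0                         # assumed, as before
ba = influence_remaining(40, half_life)  # B.A., 40 years ago
ma = influence_remaining(30, half_life)  # Master's, 30 years ago
total = ba + ma
print(f"B.A. share: {ba / total:.0%}, Master's share: {ma / total:.0%}")
```

On these assumptions the Master’s would carry roughly twice the weight of the B.A., yet the proposal offers no rule for making any such split.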
Problem 5: What of non-degree but still credentialed education? My wife is also a graduate of a National Writing Project summer writing camp, with graduate hours in writing and writing pedagogy. Who is to get credit if this has (also) improved her students’ test scores? And how is the DOE to determine who gets what share of the credit?
Problem 6: When a teacher changes schools (perhaps to teach impoverished youth), her student scores are likely to change dramatically. Does the DOE propose to re-evaluate such a teacher’s college and downgrade it as well, simply because of where she decides to teach?
I am sure the reader can come up with other absurd problems with the DOE proposal. I am simply reminded of the old saying that “for every complex problem, there exists a good-sounding simple solution that is completely wrong!” This certainly seems to be the case here.