One of the best parts about this blog, in my humble opinion, is hearing from folks in the field who are living out the realities and consequences of value-added as it is being implemented across America’s public schools. On that note, if you are a practitioner and ever feel like writing me directly about your experiences (good or bad) with value-added specifically, please do so.
One Assistant Principal from Tennessee recently wrote me an email, which, with his permission, I am sharing with all of you here. In it, he describes the extent to which value-added scores in his state, derived via the Education Value-Added Assessment System (EVAAS) used throughout Tennessee (and in some other states and many other districts), are biased not only by the types of students non-randomly assigned to teachers’ classrooms but also by the subject areas taught in certain grade levels.
I have a keen interest in the EVAAS, specifically, as this is the proprietary model I have studied and researched now for almost 10 years (see related studies in the readings section of this blog if you’re so inclined). This is also the value-added system that, let’s say, “inspired” me to devote my scholarly efforts to this topic.
He wrote:
“I wanted to share some insights and observations I’ve noticed about TVAAS [the EVAAS is called the Tennessee Value-Added Assessment System in Tennessee] through the years. I’m sure your analysis and deconstruction of our state’s VAM will be much more thorough and mathematically sound than anything I can bring to light [see another two posts forthcoming including our analyses]. That being the case, when there are things that the lay person can notice, I feel them worth sharing [and they are].
Currently I am working as an Assistant Principal in Middle Tennessee. I spent nine years teaching middle school where I was evaluated based on my TVAAS scores. I usually did quite well on my TVAAS scores but over the years noticed troubling repetitive features of the scores at my school.
What disturbs me most, and I have never seen this addressed, is how high and low value-added scores correlate with specific subjects and grade levels. For example, in Tennessee, 4th and 8th grade ELA [English/language arts] scores consistently have high value-added marks, while 6th and 7th grade ELA scores do poorly. The 5th grade scores are more of a mixed bag. To illustrate this, go to the TVAAS website and look at Shelby’s [Memphis], Davidson County’s [Nashville], or Williamson’s (Nashville’s most affluent bedroom community) TVAAS scores (Value Added Summary in District Reports).
To me this clearly illustrates that TVAAS correlates much more with subject and grade level than with teacher effect. Were these the results of only a few schools, you might assume it was a pedagogical issue. When you see this consistent pattern across thirty or forty schools, it causes me [now as an Assistant Principal] concern about evaluating teachers with this tool.
The other feature of TVAAS that concerns me is the violent swings of high school value-added math scores compared [to] the relatively subtle gains and losses of ELA. Clicking on any value-added summary for any EOC [End of Course] test on the state website should bring up the EOC scores for all subjects. What I see are math scores that frequently exceed (+/-) 20, while ELA TVAAS scores seldom exceed (+/-) 6.
Why would a metric designed to measure [the] teacher effect be so stable in ELA and fluctuate so much in math?”
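For readers who want to see what this kind of pattern looks like in miniature, below is a minimal sketch in Python using entirely fabricated scores (not actual TVAAS data) of how one might summarize value-added estimates by subject and grade. If the means and spreads differ sharply across subject/grade cells, as the fabricated numbers below are set up to show, that is consistent with this Assistant Principal’s concern that the measure is picking up subject- and grade-level effects rather than individual teacher effects.

```python
# Hypothetical illustration only: all scores below are made up, not actual TVAAS data.
# The idea is to check whether value-added estimates cluster by subject/grade
# (e.g., consistently high in 4th/8th grade ELA, consistently low in 6th/7th grade ELA,
# and far more volatile in high school math) rather than varying teacher by teacher.

from collections import defaultdict
from statistics import mean, stdev

# Each record: (subject, grade, value-added score) -- fabricated numbers for illustration.
scores = [
    ("ELA", 4, 3.1), ("ELA", 4, 2.8), ("ELA", 4, 3.5),
    ("ELA", 6, -2.9), ("ELA", 6, -3.4), ("ELA", 6, -2.2),
    ("ELA", 7, -2.5), ("ELA", 7, -3.1), ("ELA", 7, -1.8),
    ("ELA", 8, 2.6), ("ELA", 8, 3.3), ("ELA", 8, 2.1),
    ("Math", 11, 18.0), ("Math", 11, -21.0), ("Math", 11, 14.0),
    ("Math", 11, -17.0), ("Math", 11, 22.0), ("Math", 11, -12.0),
]

# Group the scores by (subject, grade) and report the mean and spread of each cell.
groups = defaultdict(list)
for subject, grade, score in scores:
    groups[(subject, grade)].append(score)

print(f"{'Subject':8} {'Grade':>5} {'Mean':>8} {'Std dev':>8}")
for (subject, grade), vals in sorted(groups.items()):
    print(f"{subject:8} {grade:5d} {mean(vals):8.2f} {stdev(vals):8.2f}")
```

Again, the numbers are invented, but the same summary run on real district-level reports is exactly the kind of eyeballing this Assistant Principal describes doing on the TVAAS website.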
This is precisely one of the many questions that administrators, policymakers, and members of the public whose monies are going to support this and other systems should be asking. If the companies cannot produce research clearly evidencing that this type of bias is not occurring, they should not get the contracts or the substantial monies that accompany them.
See our analyses of this Assistant Principal’s expressed concerns, forthcoming in the next two VAMboozled! posts.