In one of my older posts (here), I wrote about the Los Angeles Times and its controversial move to solicit Los Angeles Unified School District (LAUSD) students’ test scores via an open-records request, calculate LAUSD teachers’ value-added scores themselves, and then publish thousands of LAUSD teachers’ value-added scores, along with their “effectiveness” classifications (e.g., least effective, less effective, average, more effective, and most effective), on their Los Angeles Teacher Ratings website. They have done this repeatedly since 2010, all the while despite the major research-based issues surrounding teachers’ value-added estimates (which hopefully followers of this blog know at least somewhat well). This is also a source of professional frustration for me, since the authors of the initial articles and the creators of the searchable website (Jason Felch and Jason Song) contacted me back in 2011 asking whether what they were doing was appropriate, valid, and fair. Despite my strong warnings against it, Felch and Song thanked me for my time and moved forward.
Just yesterday, the National Education Policy Center (NEPC) at the University of Colorado – Boulder published a newsletter in which the authors answer the following question, taken from the newsletter’s title: “Whatever Happened with the Los Angeles Times’ Decision to Publish Teachers’ Value-Added Scores?” Here is what they found, summarized from one article and two studies on the topic, although you can also certainly read the full report here.
- Publishing the scores meant already high-achieving students were assigned to the classrooms of higher-rated teachers the next year, [found a study in the peer-reviewed Economics of Education Review]. That could be because affluent or well-connected parents were able to pull strings to get their kids assigned to those top teachers, or because those teachers pushed to teach the highest-scoring students. In other words, the academically rich got even richer — an unintended consequence of what could be considered a journalistic experiment in school reform.
- The decision to publish the scores led to: (1) A temporary increase in teacher turnover; (2) Improvements in value-added scores; and (3) No impact on local housing prices.
- The Los Angeles Times’ analysis erroneously concluded that there was no relationship between value-added scores and levels of teacher education and experience.
- It failed to account for the fact that teachers are non-randomly assigned to classes in ways that benefit some and disadvantage others.
- It generated results that changed when Briggs and Domingue tweaked the underlying statistical model [i.e., yielding different value-added estimates and classifications for the same teachers] (see the sketch after this list).
- It produced “a significant number of false positives (teachers rated as effective who are really average), and false negatives (teachers rated as ineffective who are really average).”
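To make the model-sensitivity point above a bit more concrete, here is a minimal, simulated sketch (in Python) of how a bare-bones value-added calculation can re-classify teachers when the specification changes, for example when a student poverty indicator is added to a prior-score-only model. To be clear, all of the data, covariates, coefficients, and quintile cut points below are illustrative assumptions of mine; this is not Briggs and Domingue’s actual model, and it uses no LAUSD data.

```python
# Illustrative sketch only: simulated data, made-up covariates and cut points.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 50, 25

# Simulate non-random sorting: each teacher's class has its own poverty rate,
# and current scores depend on prior scores, poverty, and a modest teacher effect.
teacher = np.repeat(np.arange(n_teachers), n_students)
prior = rng.normal(0, 1, teacher.size)
class_poverty_rate = rng.uniform(0.1, 0.7, n_teachers)
poverty = rng.binomial(1, class_poverty_rate[teacher])
true_effect = rng.normal(0, 0.15, n_teachers)
score = (0.7 * prior - 0.3 * poverty
         + true_effect[teacher] + rng.normal(0, 0.6, teacher.size))

def value_added(covariates):
    """Regress current scores on the chosen covariates, then average each
    teacher's residuals -- a bare-bones value-added estimate."""
    X = np.column_stack([np.ones(score.size)] + covariates)
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    resid = score - X @ beta
    return np.array([resid[teacher == t].mean() for t in range(n_teachers)])

va_simple = value_added([prior])             # Model A: prior score only
va_adjusted = value_added([prior, poverty])  # Model B: prior score + poverty

def quintile(v):
    # Map value-added estimates to five "effectiveness" categories (0 = lowest).
    return np.digitize(v, np.quantile(v, [0.2, 0.4, 0.6, 0.8]))

moved = np.sum(quintile(va_simple) != quintile(va_adjusted))
print(f"Teachers whose effectiveness category changed: {moved} of {n_teachers}")
```

Even in a toy setup like this, some teachers typically land in a different “effectiveness” category depending on which specification is used, which is the same basic instability the list above describes with respect to the real LAUSD ratings.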
After the Los Angeles Times used a different approach in 2011, Catherine Durso found:
- Class composition varied so much that comparisons of the value-added scores of two teachers were only valid if both teachers were assigned students with similar characteristics.
- Annual fluctuations in results were so large that they led to widely varying conclusions from one year to the next for the same teacher.
- There was strong evidence that results were often due to the teaching environment, not just the teacher.
- Some teachers’ scores were based on very little data.
In sum, while “[t]he debate over publicizing value-added scores, so fierce in 2010, has since died down to a dull roar,” more states (e.g., New York and Virginia), organizations (e.g., Matt Barnum’s Chalkbeat), and news outlets (e.g., the Los Angeles Times, which has apparently discontinued this practice, although their website is still live) need to take a stand against, or prohibit, the publication of individual teachers’ value-added results from here on out. As I noted to Jason Felch and Jason Song a long time ago, this IS simply bad practice.
https://www.ecs.org/wp-content/uploads/Teacher_Evaluations.pdf
This report shows how states are addressing teacher evaluation under ESSA. I see that many are still using a corrupted version of “growth” measures tied to improvement or to achievement, with observations and surveys added to the mix.
Ohio and Tennessee are still using VAM, and as late as 2015 a consensus document from the Carnegie Foundation for the Advancement of Teaching offered a favorable view of VAM. Anthony Bryk and the Gates Foundation are current promoters of Improvement Science in education: data-driven “continuous improvement” schemes seeking a reduction in the variability of outcomes from education. As usual, math is the preferred focus of rapid-cycle efficacy studies. All of this is troubling if you work in the arts and humanities.