A study released yesterday by Mathematica Policy Research (and sponsored by the U.S. Department of Education), titled “Teachers with High ‘Value Added’ Can Boost Test Scores in Low-Performing Schools,” implies that, yet again, value-added estimates are the key statistical indicators we as a nation should be using, above all else, to make practical and policy decisions about America’s public school teachers. In this case, the argument is that value-added estimates can and should be used to decide where to place high value-added teachers so that they might have greater effects, as well as greater potential to “add” more “value” to student learning and achievement over time.
While most, if not all, educational researchers agree with the fundamental idea behind this research study — that it is important to explore ways to improve America’s lowest-achieving schools by providing students in such schools increased access to better teachers — this study overstates its “value-added” results.
Hence, I have to issue a Consumer Alert! While I don’t recommend reading the whole technical report (at over 200 single-spaced pages), the main issues with this piece (again, not in terms of its overall purpose but in terms of its value-added implications) follow:
- The high value-added teachers who were selected to participate in this study, and to transfer into high-needs schools to teach for two years, were disproportionately National Board Certified Teachers and teachers with more years of teaching experience. The finding that these teachers, selected only because they were high value-added teachers, outperformed their peers was confounded by the very fact that they were compared to “similar” teachers in the high-needs schools, many of whom were not certified as exemplary teachers and many of whom (20%) were new teachers…as in, entirely new to the teaching profession! While the high value-added teachers who chose to teach in high-needs schools for two years (with $20,000 bonuses to boot) were likely wonderful teachers in their own right, the same study results would likely have been achieved by simply choosing teachers with more than X years of experience or choosing teachers whose supervisors selected them as “the best.” Hence, this study was not about using “value-added” as the arbiter of all that is good and objective in measuring teacher effects; it was about selecting teachers who were distinctly different from the teachers to whom they were compared and then attributing the predictable results back to the “value-added” selections that were made.
- Relatedly, many of the politicians and policymakers who are advancing national and state value-added initiatives and policies are continuously using false assumptions about teacher experience and teacher credentials, and about how/why these things supposedly do not matter, to advance their agendas. In this study, however, it seems that teacher experience and credentials mattered most. Results from this study hence contradict initiatives, for example, to get rid of salary schedules that rely on years of experience and credentials, as value-added scores, as evidenced in this study, do seem to capture these other variables (i.e., experience and credentials) as well.
- Another argument could be made against the millions of dollars in taxpayer-generated funds that our politicians are pouring into these initiatives, as well as into federally funded studies like this one. For years, we have known that America’s best teachers are disproportionately located in America’s best schools. While this, too, was one of this study’s “new” findings, it is nowhere near “new” to the research literature in this area. Nor has it only recently become a “growing concern” (see, for example, an article I wrote in Phi Delta Kappan back in 2007 that helps to explore this issue’s historical roots). In addition, we have about 20 years of research examining teacher transfer initiatives like the one studied here; such initiatives are so costly that they are rarely if ever sustainable, and they typically fizzle out in due time. This is yet another instance in which the U.S. Department of Education took an ahistorical approach and funded a research study with a question to which we already knew the answer: “transferring teacher talent” works to improve schools, but not on real-world education budgets. It would be fabulous should significant investments be made in this area, though!
- Interesting to add, as well, is that “[s]pearheading” this study were staff from The New Teacher Project (TNTP), the group that authored the now-famous The Widget Effect study. This group is also famously known for advancing the false notion that teachers matter so much that they can literally wipe out all of the social issues that continue to ail so many of America’s public schools (e.g., poverty). This study’s ahistorical and overstated results are a bit less surprising with this in mind.
- Finally, call me a research purist, but another Consumer Alert! should automatically ding whenever “random assignment,” “random sampling,” “a randomized design,” or a like term (e.g., “a multisite randomized experiment,” as was used here) appears, especially in a study’s marketing materials. In this study, such terms are exploited when in fact “treatment” teachers were selected because of their high value-added scores (which were not calculated consistently across participating districts); only 35 to 65 percent of teachers had enough value-added data to be eligible for selection; teachers were surveyed to determine whether they wanted to participate (self-selection bias); teachers were approved to participate by their supervisors; and then teachers were interviewed by the new principals for whom they were to work, just as in an actual hire. To make this truly random, principals would have had to agree to whatever placements they got, and all high value-added teachers would have had to be randomly selected and then randomly placed, with equal probabilities in their placements, regardless of the actors and human agents involved. On the flip side, the “control” teachers to whom the “treatment” (i.e., high value-added) teachers were to be compared should also have been randomly selected from a pool of control applicants who were “similar” to the other teachers in the control schools. To get at more valid results, the control group teachers should have better represented “the typical teachers” at the control schools (i.e., not brand new teachers entering the field). “[N]ormal hiring practices” should not have been followed if study results were to answer whether expert teachers would have greater effects on student achievement than other teachers in high-needs schools.
The research question answered, instead, was whether high value-added teachers, a statistically significant proportion of whom were National Board Certified and who had about four more years of teaching experience on average, would have greater effects on student achievement than other newly hired teachers, many of whom (20%) were brand new to the profession.