About two months ago, I posted (1) a critique of a highly publicized Mathematica Policy Research study released to the media about the vastly overstated “value” of value-added measures, and (2) another critique of another study released to the media by the National Bureau of Economic Research (NBER). This one, like the other, was not peer-reviewed, or even internally reviewed, yet it was released despite its major issues (e.g., overstated findings about VAMs based on a sample for which only 17% of teachers actually had value-added data).
Again, neither study went through a peer review process, both were fraught with methodological and conceptual issues that did not warrant their findings, and both, regardless, were released to the media for wide dissemination.
Yet again, VAM enthusiasts are attempting to VAMboozle policymakers and the general public with another faulty study, again released by the National Bureau of Economic Research (NBER). But, in an unprecedented move, this time NBER has released the same, highly flawed study three times, even though the first version, released in 2011, still has not made it through peer review to official publication and has, accordingly, proved itself nothing more than a technical report with major methodological issues.
In the first study (2011), Raj Chetty (Economics Professor at Harvard), John Friedman (Assistant Professor of Public Policy at Harvard), and Jonah Rockoff (Associate Professor of Finance and Economics at Harvard) conducted value-added analyses on a massive data set and (over-simplistically) presented (highly questionable) evidence favoring teachers’ long-lasting, enduring, and in some cases miraculous effects. While some of the findings would have been very welcome to the profession had they indeed been true (e.g., that high value-added teachers substantively affect students’ incomes in their adult years), the study’s authors greatly overstated their findings, and they did not consider alternative hypotheses about what other factors besides teachers might have caused the outcomes they observed (e.g., those things that happen outside of schools).
Accordingly, and more than appropriately, this study has since been repeatedly critiqued in subsequent attempts to undo what should not have been done in the first place (thanks to both the media and the study’s authors for the exaggerated spin put on the results). See, for example, one peer-reviewed critique here, two others conducted by well-known education scholars (i.e., Bruce Baker [Education Professor at Rutgers] and Dale Ballou [Associate Professor of Education at Vanderbilt]) here and here, and another released by the Institute of Education Sciences’ What Works Clearinghouse here.
Maybe in response to their critics, maybe to drive the false findings into more malformed policies, maybe because Chetty (the study’s lead author) just received the John Bates Clark Medal awarded by the American Economic Association, or maybe simply to have the last word, NBER just released the exact same paper in two more installments. See the second and third releases, positioned as Part I and Part II, to confirm that they are exactly the same content being promulgated yet again. While “they” acknowledge having done this on the first page of each of the two, it is pretty unethical to go a second round given all of the criticism, the positive and negative press this “working paper” received after its original release(s), and the fact that the study has still not made it into print in a peer-reviewed journal.
*Thanks to Sarah Polasky for helping with this post.
It’s basic reform education. If you keep assessing students, they get smarter. If you keep releasing a study, it gets righter.
Harvard is only one of a dozen high-profile institutions that have become sources of propaganda about K-12 education and teacher performance as measured by scores on standardized tests.
The parallel for the Chetty propaganda is the non-peer-reviewed Measures of Effective Teaching study, funded at about $64 million by the Bill and Melinda Gates Foundation and directed by Thomas Kane, Professor of Economics and Education at the Harvard Graduate School of Education and Director of Harvard’s Center for Education Policy Research. Kane also serves as a deputy director of U.S. education for the Bill & Melinda Gates Foundation. Douglas Staiger, Professor of Economics at Dartmouth, was also a lead researcher for the MET project. The Gates Foundation also “bought” many of the Common Core State Standards initiatives, and it funds multi-state projects where teachers who produce above-average gains in test scores are defined as “effective.”
The Gates-funded studies of various measures of effective teaching, known as the MET project, did not meet the minimal threshold for publication in any research journal, but Kane was invited to report on his work in Congress. As in the case of the Chetty study, publicity for the MET project overshadowed informed criticism of the premises and results of these studies. See Rothstein, J. & Mathis, W. J. (2013). Have we identified effective teachers? Culminating findings from the Measures of Effective Teaching project. (Review). Boulder, CO: National Education Policy Center. Retrieved from http://nepc.colorado.edu/thinktank/review-MET-final-2013.