Moshe Adler on Chetty et al.


Recall the study at the focus of many posts on this blog? Written by Raj Chetty and two of his colleagues at Harvard, it was first cited in President Obama’s 2012 State of the Union address, when Obama said, “We know a good teacher can increase the lifetime income of a classroom by over $250,000.” More recently it was the focus of much attention when the judge in Vergara v. California cited Chetty’s testimony and the aforementioned team’s research as evidence that “a single year in a classroom with a grossly ineffective teacher costs students $1.4 million in lifetime earnings per classroom.”

Well, one of this study’s biggest and most important critics, himself an expert in economics, has written a response to all of this. More specifically, Moshe Adler – an economist affiliated with Columbia University and Empire State College, State University of New York, and author of the 2010 book Economics for the Rest of Us: Debunking the Science That Makes Life Dismal – first wrote (in April 2014) a critique of Chetty et al.’s studies for the National Education Policy Center (NEPC), in which he also compared the study from its first release in 2011 (cited by Obama) to its current 2014 version (cited by the judge in Vergara v. California); that critique can be found here. Chetty et al. then wrote a response (in May 2014) that can be found at the bottom of the page here, after which Adler released a final response (in June 2014) that can also be found at the very bottom of the same page here.

For those of you interested in reading all three pieces, all three are really quite critical for understanding Chetty et al.’s methods, as well as their methods’ shortcomings; their assumptions, as well as those assumptions’ feasibility in context; their results, as unfortunately overstated as they are; and the like. For those of you who want just the highlights, though (as also referenced in the press release just released here), here are the nine key points taken from the two studies. These are the nine concerns raised and established by Adler in the first round, to which Chetty et al. responded, but which seemingly still stand as solid criticisms.

As per Chetty et al.’s study #1:

  1. Value-added scores in this report and in general are unstable from year to year and from test to test, but the report ignores this instability.
  2. The report inflates the effect of teacher value-added by assuming that a child’s test scores can be improved endlessly [over time].
  3. The procedure that the report develops for calculating teacher value-added varies greatly between subjects within school levels (math or English in elementary/high school) and between schools within subjects (elementary or middle school math/English), indicating that the calculated teacher value-added may be random.
  4. The commonly used method for determining how well a model predicts results is to compute correlations, illustrated through scatterplot graphs. The report, however, presents neither the calculations nor the graphs, and instead invents its own novel graph to show that the measurements of value-added produce good predictions of future teacher performance. But this is misleading. Notwithstanding the graph, it is possible that the quality of predictions in the report was poor.

As per Chetty et al.’s study #2:

  1. An earlier version of the report found that an increase in teacher value-added has no effect on income at age 30, but this result is not mentioned in this revised version. Instead, the authors state that they did not have a sufficiently large sample to investigate the relationship between teacher value-added and income at any age after 28, but this claim is untrue. They had 220,000 observations (p. 15), which is a more than sufficiently large sample for their analysis.
  2. The method used to calculate the 1.34% increase [that a 1 standard deviation increase in teacher VA raises wages at age 28 by 1.34% (p. 37)] is misleading, since observations with no reported income were included in the analysis, while high earners were excluded. If done properly, it is possible that the effect of teacher value-added is to decrease, not increase, income at age 28 (or 30).
  3. The increase in annual income at age 28 due to having a higher quality teacher “improved” dramatically from the first version of the report ($182 per year, report of December, 2011) to the next ($286 per year, report of September, 2013). Because the data sets are not identical, a slight discrepancy between estimates is to be expected. But since the discrepancy is so large, it suggests that the correlation between teacher value-added and income later in life is random.
  4. In order to achieve its estimate of a $39,000 income gain per student, the report makes the assumption that the 1.34% increase in income at age 28 will be repeated year after year. Because no increase in income was detected at age 30, [however] and because 29.6% of the observations consisted of non-filers, this assumption is unjustified.
  5. The effect of teacher value-added on test scores fades out rapidly. The report deals with this problem by citing two studies that it claims buttress the validity of its own results. This claim is [also] both wrong and misleading.
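The arithmetic behind points 2–4 can be sketched from the figures quoted above alone. This is a back-of-envelope illustration, not the report's actual projection, which uses a full age-earnings profile and discounting; the 40-year career length here is a hypothetical assumption for illustration.

```python
# Back-of-envelope sketch using only the figures quoted in the list above.

gain_pct = 0.0134    # 1.34% wage gain at age 28 per 1 SD of teacher VA
gain_at_28 = 286.0   # dollar gain per year, per the September 2013 version

# Implied mean income at age 28 in the sample ($286 is 1.34% of this).
implied_mean_income = gain_at_28 / gain_pct
print(f"implied mean income at 28: ${implied_mean_income:,.0f}")

# The $39,000-per-student figure requires assuming the gain persists for
# an entire career. If the dollar gain simply stayed flat at its age-28
# level over a hypothetical 40 working years:
flat_total = gain_at_28 * 40
print(f"40 years at the age-28 gain: ${flat_total:,.0f}")
# Reaching $39,000 thus also requires earnings (and the gain) to grow
# substantially with age -- on top of the persistence assumption that
# Adler argues the age-30 result undermines.
```

The flat-persistence total ($11,440) falls far short of $39,000, which makes concrete why the year-after-year persistence assumption in point 4 does so much of the work in the headline estimate.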
