VAM and Observational (Co)Relationships, Again


In Education Policy Analysis Archives, the peer-reviewed, open-access journal on which I serve as an Associate Editor,* a VAM-related article was recently published, titled “Sorting out the signal: Do multiple measures of teachers’ effectiveness provide consistent information to teachers and principals?”

In this article, authors Katharine Strunk (Associate Professor at USC), Tracey Weinstein (StudentsFirst), and Reino Makkonen (WestEd Senior Policy Associate) examine the relationships between VAMs and observational scores in the Los Angeles Unified School District (LAUSD). If the purpose of this study sounds familiar, it should: a good number of researchers have set out to explore these same correlations in the past, of course using different data. See VAMboozled! posts about such studies, for example, here, here, and here.

In this study, the researchers “find moderate [positive] correlations between value-added and observation-based measures, indicating that teachers will receive similar but not entirely consistent signals from each performance measure.” The specific correlations they observed range from r = 0.18 in mathematics to r = 0.24 in English/language arts (ELA), coefficients that most others classifying such correlations would consider negligible to small, respectively, and at best.
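To put coefficients of this size in concrete terms, squaring a Pearson r gives the proportion of variance the two measures share. The quick sketch below (in Python, purely for illustration; the r values are the ones reported in the article) makes the point:

```python
# Illustrative only: squaring a Pearson correlation coefficient (r) yields
# the coefficient of determination (r^2), i.e., the proportion of variance
# the two measures share. The r values below are those reported in the article.
correlations = {"mathematics": 0.18, "English/language arts (ELA)": 0.24}

for subject, r in correlations.items():
    shared_variance = r ** 2  # proportion of variance shared by the two measures
    print(f"{subject}: r = {r:.2f}, shared variance = {shared_variance:.1%}")

# mathematics: r = 0.18, shared variance = 3.2%
# English/language arts (ELA): r = 0.24, shared variance = 5.8%
```

In other words, even at their strongest, the two measures share less than 6% of their variance, which is why “negligible to small” is the fairer characterization.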

Again, similar “negligible” and “small” correlation coefficients have been found time and time again, making coefficients of this size the most commonly observed across studies, and hence the strongest support for the assertion that VAMs and observational scores are not nearly as highly correlated as they should be… IF both are in fact measuring at least some of the same thing: teacher effectiveness.

The researchers in this article spin these correlations differently than many if not most would, writing, for example, that the low to moderate [see comments about this classification above] correlations they observed “means that, while the two measures give the same general signal about effectiveness on average, they may also provide teachers and administrators with unique information about their levels of effectiveness.” Others might replace their term “unique” with a more appropriate adjective like “uncertain,” in which case these correlations might, rather, “provide teachers and administrators with [more uncertain] information about [teachers’] levels of effectiveness” than what might be expected.

That latter phrasing might be a more reasonable statement. As a colleague wrote in an email to me about this article, “they think they have something–but little was found.” I could not agree more. Hence, I suppose it depends on which side of this particular issue each VAM researcher/author stands; although the sizes of the correlation coefficients are indeed consistent in their negligible to small ranges across studies, as was also found here (though not labeled as such).

Hence, again, it is how researchers define and interpret these correlations, literally for better or worse, that varies, NOT the sizes of the correlations themselves.

The authors ultimately conclude that “[o]verall, unadjusted observation-based measures and VAMs provide teachers with a modestly [added emphasis for word choice again] consistent, but not identical, signal of effectiveness.” While nobody is looking for an identical or perfect (co)relationship here, the correlations observed are, again, far from modest and far from pragmatically useful.

*Full disclosure: I served as the editor on this piece, managing the peer review and revision process through to publication. Here, I comment on this piece not as associate editor, but as a consumer of this research, now that the manuscript has made it through the blind review process to publication.
