Forcing the Fit Using Alternative “Student Growth” Measures


As discussed previously on this blog, when we talk about teacher effectiveness as defined by the output of VAMs, we are talking about the VAMs that still, to date, impact only 30%-40% of all of America's public school teachers. These are the teachers who typically teach mathematics and/or reading/language arts in grades 3-8.

The teachers who are not VAM-eligible are those who typically teach in the primary grades (i.e., grades K-2), high school teachers who teach more specialized subject areas that are often not tested using large-scale tests (e.g., geometry, calculus), and teachers who teach outside the subject areas typically tested (e.g., social studies, science [although there is a current push to increase testing in science], physical education, art, music, special education, etc.). Sometimes entire campuses of teachers are not VAM-eligible.

So, what are districts to do when they are to follow the letter of the law, and the accountability policies being financially incentivized by the feds and then the states (e.g., via Race to the Top and the NCLB waivers)? A new report released by the Institute of Education Sciences (IES), the research arm of the US Department of Education, and produced by Mathematica Inc. (via a contract with the IES) explains what states are doing in order to comply. You can find the summary and full report, titled “Alternative student growth measures for teacher evaluation: Profiles of early-adopting districts,” here.

What investigators found is that these “early adopters” are using end-of-course exams, commercially available tests (e.g., the Galileo assessment system), and Student Learning Objectives (SLOs), which are teacher-developed and administrator-approved, to hold teachers accountable for their students’ growth, although an SLO is about as subjective as it gets in the company of the seemingly objective, more rigorous, and vastly superior VAMs. In addition, the districts sampled are adopting the same VAM methodologies to keep all analytical approaches (except for the SLOs) the same, almost regardless of the measures used. The logic seems to be that if the measures exist, or are to be adopted anyway, districts might as well “take advantage of them” to evaluate value-added, because the assessments can be used (and exploited) to measure the value-added of more and more teachers. What?

This is the classic case of what we call “junk science.” We cannot just take whatever tests happen to be available, regardless of the standards to which they are (or are not) aligned, and run the data through the same value-added calculator in the name of accountability consistency.

Research already tells us that using different tests, even with the same students of the same teachers at the same time, and run through the same VAMs, yields very, very different results (see, for example, the Papay article here).
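To see why this matters, here is a minimal, purely illustrative simulation of that finding. Everything in it is a hypothetical assumption on my part (the sample sizes, the noise levels, and the simple covariate-adjustment model), not the report’s or Papay’s actual data or models. It shows the same students, the same teachers, and the same VAM specification producing noticeably different teacher rankings simply because two different tests measure the same underlying achievement with different error.

```python
import numpy as np

# Hypothetical sketch: a simple covariate-adjustment "VAM" (regress current
# scores on prior scores plus teacher indicators), run twice on two different
# tests taken by the SAME students of the SAME teachers at the same time.
rng = np.random.default_rng(0)
n_teachers, per_class = 20, 25
n = n_teachers * per_class

teacher = np.repeat(np.arange(n_teachers), per_class)   # class assignments
ability = rng.normal(size=n)                            # latent achievement
true_effect = rng.normal(scale=0.2, size=n_teachers)    # "true" teacher effects
prior = ability + rng.normal(scale=0.5, size=n)         # prior-year score

def simulate_test(noise_sd):
    """A current-year test with its own measurement error / alignment."""
    return ability + true_effect[teacher] + rng.normal(scale=noise_sd, size=n)

def vam_estimates(score):
    """OLS of current score on prior score + teacher dummies; centered effects."""
    dummies = (teacher[:, None] == np.arange(n_teachers)).astype(float)
    X = np.column_stack([prior, dummies])
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    effects = beta[1:]
    return effects - effects.mean()

est_a = vam_estimates(simulate_test(noise_sd=0.5))  # hypothetical "test A"
est_b = vam_estimates(simulate_test(noise_sd=0.8))  # hypothetical "test B"

# Same students, same teachers, same model -- different tests, different ranks.
rank_a = est_a.argsort().argsort()
rank_b = est_b.argsort().argsort()
print(f"correlation of estimated effects across tests: "
      f"{np.corrcoef(est_a, est_b)[0, 1]:.2f}")
print("teachers whose quartile changes:", int(np.sum(rank_a // 5 != rank_b // 5)))
```

Even in this toy setup, where both tests measure exactly the same underlying construct, a nontrivial number of teachers shift quartiles depending on which test feeds the model; with real tests aligned to different standards, the divergence is worse.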

Do the feds not see that forcing states to force the fit is completely wrong-headed and simply wrong? They are the ones who funded this study, but apparently see nothing wrong with the absurdity of the study’s results. Rather, they suggest, results should be used to “provide key pieces of information about the [sampled] districts’ experiences” so that results “can be used by other states and districts to decide whether and how to implement alternative assessment-based value-added models or SLOs.”

Force the fit, they say, regardless of the research or really any inkling of common sense. Perhaps this will help to further line the pockets of corporate reformers eager to offer not only their VAM services but now also even more tests, end-of-course exams, and SLO systems.

Way to lead the nation!

2 thoughts on “Forcing the Fit Using Alternative ‘Student Growth’ Measures”

  1. They do understand. They know exactly what they are doing. They are creating an avenue to shift money to charter executives, owners, and investors at the expense of our students.
