In Ahern v. King, filed on April 16, 2014, the Syracuse Teachers Association, its President Kevin Ahern, and a set of teachers employed by the Syracuse School District sued the state of New York, including John King, Commissioner of the New York State Education Department (who later succeeded Arne Duncan as U.S. Secretary of Education), along with others representing the state, over its teacher evaluation system.
They alleged that the state’s method of evaluating teachers is unfair to those who teach economically disadvantaged students. More specifically, and as per an article in Education Week to which I referred in a post a couple of weeks ago, they alleged that the state’s value-added model (which is actually a growth model) fails to “fully account for poverty in the student-growth formula used for evaluation, penalizing some teachers.” They also alleged that “the state imposed regulations without public comment.”
Well, the decision on the state’s motion to dismiss is in. The Court rejected the state’s attempt to dismiss this case; hence, the case will move forward. For the full text, see the official Decision and Order attached.
As per the plaintiffs’ charge about controlling for student demographic variables, the Decision and Order notes that the growth model used by the state was developed, under contract, by the American Institutes for Research (AIR). In AIR’s official report (a court exhibit), AIR acknowledged the need to control for students’ demographic variables to moderate potential bias; hence, “if a student’s family was participating in any one of a number of economic assistance programs, that student would have been ‘flagged,’ and his or her growth would have been compared with students who were similarly ‘flagged’…If a student’s economic status was unknown, the student was treated as if he or she was not economically disadvantaged” (p. 7).
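To make the comparison logic described in that passage concrete, here is a minimal sketch in Python. It assumes only what the quoted report states: students are “flagged” by economic status, each student’s growth is compared against similarly flagged peers, and an unknown status is treated as not disadvantaged. All names, scores, and the peer-group mean comparison are purely illustrative, not AIR’s actual model.

```python
# Hypothetical sketch of the peer-comparison logic described in AIR's report.
# All student names and growth numbers below are invented for illustration.
from statistics import mean

students = [
    # (name, economically-disadvantaged flag (None = unknown), growth score)
    ("A", True, 12.0),
    ("B", True, 8.0),
    ("C", False, 15.0),
    ("D", None, 10.0),  # unknown status -> treated as not disadvantaged
]

def comparison_group(flag):
    # Per the report, a student with unknown economic status is treated
    # as if he or she were not economically disadvantaged.
    return bool(flag)

# Bucket growth scores by comparison group ("flagged" vs. not).
groups = {}
for name, flag, growth in students:
    groups.setdefault(comparison_group(flag), []).append(growth)

# Each student's growth is compared only against similarly flagged peers.
for name, flag, growth in students:
    peers = groups[comparison_group(flag)]
    print(f"{name}: growth {growth} vs. peer-group mean {mean(peers):.1f}")
```

The point of the sketch is simply that the comparison set, and therefore a teacher’s resulting score, depends entirely on how the “flag” is defined, which is precisely what the plaintiffs argue was never properly promulgated.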
Nothing in the document was explicated, however, regarding whether the model’s output was indeed biased, or whether the individual plaintiffs’ value-added scores were biased in this way, as they charged.
What was explicated was that the defendants attempted to dismiss this case given a four-month statute of limitations. Plaintiffs, however, prevailed in that the state never made a “definitive decision” that was “readily ascertainable” to teachers on this matter (i.e., about controlling for students’ demographic factors within the model); hence, the statute-of-limitations clock never really started running.
As per the Court, the state’s definition of “similar students” to whom other students are to be compared (as mentioned prior) was indeterminate until December of 2013. This was evidenced both by the state’s Board of Regents’ June 2013 approval of an “enhanced list of characteristics used to define” such students and by the state’s December 2013 publication of AIR’s technical report, in which its methods were (finally) made public. Hence, the Court found that the plaintiffs’ claims fell within the four-month statute of limitations upon which the state was riding in order to dismiss, because the state had not “properly promulgated” its methodology for measuring student growth at that time (and arguably still has not).
So ultimately, the Court found that the defendants failed to establish their entitlement to dismissal of the individual teachers’ claims on statute-of-limitations grounds (as explained in more detail in the Decision and Order). The Court denied the state of New York’s motion to dismiss, and this one will move forward in New York.
This is great news for those in New York, especially considering that this state, more than most others, is one in which education “leaders” have attached high-stakes consequences to such teacher evaluation output. As per state law, teacher evaluation scores (based in large part on value-added, or growth in this particular case) can be used as “a significant factor for employment decisions including but not limited to, promotion, retention, tenure determination, termination, and supplemental compensation.”
When are teachers going to take control of the conversation and make the argument that high-stakes testing results do not give accurate information about student learning or teacher effectiveness? According to statisticians, a test score is merely a measure of how well a student performed on a given test on one particular day. The truth is that these tests are an invalid measure to begin with. So long as all teachers demand is that the formulas used to drive test-based decisions be tweaked, the argument for good teaching, real education, and learning is lost.