More on the VAM (Ab)Use in Florida

In my most recent post, “Time to Organize in Florida” (see here), I wrote about how teachers in Florida were (two or so weeks after the start of the school year) being removed from teaching in Florida schools if their state-calculated, teacher-level VAM scores deemed them teachers who “needed improvement” or were “unsatisfactory.” Yes – they were being removed from teaching in low-performing schools IF their VAM scores, and VAM scores alone, deemed them as not adding value.

A reporter from the Tampa Bay Times with whom I spoke about this story just published his article on it, titled “Florida’s ‘VAM Score’ for Rating Teachers is Still Around, and Still Hated” (see the full article here, and see the article’s full reference below). This piece captures the situation in Florida better than my prior post; hence, please give it a read.

Again, click here to read.

Full citation: Solochek, J. S. (2019). Florida’s ‘VAM score’ for rating teachers is still around, and still hated. Tampa Bay Times. Retrieved from

New (Unvetted) Research about Washington DC’s Teacher Evaluation Reforms

In November of 2013, I published a blog post about a “working paper” released by the National Bureau of Economic Research (NBER) and written by authors Thomas Dee – Economics and Educational Policy Professor at Stanford, and James Wyckoff – Economics and Educational Policy Professor at the University of Virginia. In the study titled “Incentives, Selection, and Teacher Performance: Evidence from IMPACT,” Dee and Wyckoff (2013) analyzed the controversial IMPACT educator evaluation system that was put into place in Washington DC Public Schools (DCPS) under the then Chancellor, Michelle Rhee. In this paper, Dee and Wyckoff (2013) presented what they termed to be “novel evidence” to suggest that the “uniquely high-powered incentives” linked to “teacher performance” via DC’s IMPACT initiative worked to improve the performance of high-performing teachers, and that dismissal threats worked to increase the voluntary attrition of low-performing teachers, as well as improve the performance of the students of the teachers who replaced them.

I critiqued this study in full (see both short and long versions of this critique here), and ultimately asserted that the study had “fatal flaws” which compromised the (exaggerated) claims Dee and Wyckoff (2013) advanced. These flaws included but were not limited to the fact that only 17% of the teachers included in this study (i.e., teachers of reading and mathematics in grades 4 through 8) were actually evaluated under the value-added component of the IMPACT system. Put inversely, 83% of the teachers included in this study about teachers’ “value-added” did not have student test scores available to determine if they were indeed of “added value.” That is, 83% of the teachers evaluated were, rather, assessed on their overall levels of effectiveness, or subsequent increases/decreases in effectiveness, as per only the subjective observational and other self-report data included within the IMPACT system. Hence, while the authors’ findings were presented as hard fact, given the 17% figure their (exaggerated) conclusions did not at all generalize across teachers, despite what they claimed.

In short, the extent to which Dee and Wyckoff (2013) oversimplified very complex data to oversimplify a very complex context and policy situation, after which they exaggerated questionable findings, was at issue, and should have been reconciled or cleared up prior to the study’s release. I should add that this study was published in 2015 in the (economics-oriented and not educational-policy-specific) Journal of Policy Analysis and Management (see here), although I have not since revisited the piece to compare (e.g., via a content analysis) the original 2013 version to the final 2015 version.

Anyhow, they are at it again. Just this past January (2016) they published another report, albeit alongside two additional authors: Melinda Adnot, a Visiting Assistant Professor at the University of Virginia, and Veronica Katz, an Educational Policy PhD student, also at the University of Virginia. This study, titled “Teacher Turnover, Teacher Quality, and Student Achievement in DCPS,” was also (prematurely) released as a “working paper” by the same NBER, again, without any internal or external vetting but (irresponsibly) released “for discussion and comment.”

Hence, I provide my “discussion and comments” below, all the while underscoring how this continues to be problematic, given also the fact that I was contacted by the media for comment. Frankly, no media reports should be released about these (or, for that matter, any other) “working papers” until they are not only internally but also externally reviewed (e.g., in press or published, post vetting). Unfortunately, as it too commonly does, NBER released this report, clearly without such concern. Now we, as the public, are responsible for consuming this study with much critical caution, while also advocating that others do the same (and helping them to do so). Hence, I write into this post my critiques of this particular study.

First, the primary assumption (i.e., the “conceptual model”) driving this Adnot, Dee, Katz, & Wyckoff (2016) piece is that low-performing teachers should be identified and replaced with more effective teachers. This is akin to the assumption noted in the first Dee and Wyckoff (2013) piece. It should be noted here that in DCPS teachers rated as “Ineffective” or consecutively as “Minimally Effective” are “separated” from the district; hence, DCPS has adopted educational policies that align with this “conceptual model” as well. Interesting to note is how researchers, purportedly external to DCPS, entered into this study with the same a priori “conceptual model.” This, in and of itself, is an indicator of researcher bias (see also forthcoming).

Nonetheless, Adnot et al.’s (2016) highlighted finding was that “on average, DCPS replaced teachers who left with teachers who increased student achievement by 0.08 SD [standard deviations] in math.” Buried further into the report they also found that DCPS replaced teachers who left with teachers who increased student achievement by 0.05 SD in reading (at not a 5% but a 10% statistical significance level). These findings, in simpler but also more realistic terms, mean that (if actually precise and correct, also given all of the problems with how teacher classifications were determined at the DCPS level), “effective” mathematics teachers who replaced “ineffective” mathematics teachers increased student achievement by approximately 2.7%, and “effective” reading teachers who replaced “ineffective” reading teachers increased student achievement by approximately 1.7% (at not a 5% but a 10% statistical significance level). These are hardly groundbreaking results, as these proportional movements likely represented one or maybe two total test items on the large-scale standardized tests used to assess DCPS’s policy impacts.
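To put such standard-deviation effects in rough perspective, one common back-of-the-envelope conversion expresses an effect in SD units as a percentile shift for a median student under a normality assumption. This is my own illustrative sketch, not the authors’ conversion (nor the percentage conversion used above), and the exact figures depend on the test’s actual score distribution:

```python
from statistics import NormalDist

def percentile_shift(effect_sd: float) -> float:
    """Percentile-point gain for a median student, assuming normally
    distributed test scores (an idealizing assumption)."""
    return (NormalDist().cdf(effect_sd) - 0.5) * 100

print(round(percentile_shift(0.08), 1))  # math effect: ~3.2 percentile points
print(round(percentile_shift(0.05), 1))  # reading effect: ~2.0 percentile points
```

However the conversion is done, shifts on the order of a few percentile points (or, as framed above, one or two test items) are small relative to the sweep of the policy claims built upon them.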

Interesting to also note is that not only were the “small effects” exaggerated to mean much more than what they are actually worth (see also forthcoming), but also that only the larger of the two findings – the mathematics finding – is highlighted in the abstract. The complementary and smaller reading effect is actually buried in the text. Also buried is that these findings pertain only to grades four through eight, general education teachers who were value-added eligible, akin to Dee and Wyckoff’s (2013) earlier piece (e.g., typically 30% of a school’s population, although Dee and Wyckoff’s (2013) piece marked this percentage at 17%).

As mentioned prior, none of this would have likely happened had this piece been internally and/or externally reviewed prior to this study’s release.

Regardless, Adnot et al. (2016) also found that the attrition of relatively higher-performing teachers (e.g., “Effective” or “Highly Effective”) had a negative but also statistically insignificant effect.

Hence, it can be concluded that the “finding” highlighted in the abstract of this piece was not the only “finding”; rather, buried in the text were other findings that the researchers (perhaps) purposefully downplayed. It is possible, in other words, that because these other findings did not support the researchers’ a priori conclusions and claims, the researchers chose not to bring attention to these findings, or rather the lack thereof (e.g., in the abstract).

Related, I should note that in a few places the authors exaggerate how, for example, teachers’ effects on their students’ achievement are so tangible, without any mention of contrary reports, namely that published by the American Statistical Association (ASA), in which the ASA evidenced that these (oft-exaggerated) teacher effects account for no more than 1%–14% of the variance in students’ growth scores (see more information here). In fact, teacher effectiveness is very likely not “qualitatively large,” as Adnot et al. (2016) argue without evidence in this piece, and also imply throughout as a foundational part of their aforementioned “conceptual model.”
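To illustrate what the ASA’s range means in practice: variance explained translates into a correlation via its square root, so even the upper bound implies only a modest association between teacher effects and student growth. A small sketch of my own (the 1%–14% figures are the ASA’s; the framing as correlations is mine):

```python
import math

# ASA-reported range: teacher effects explain 1%-14% of the variance
# in students' growth scores
lower_r2, upper_r2 = 0.01, 0.14

# Implied correlation magnitudes, via r = sqrt(R^2)
print(round(math.sqrt(lower_r2), 2))  # 0.1
print(round(math.sqrt(upper_r2), 2))  # 0.37
```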

Likewise, while most everyone would likely agree that there are “stark inequities” in students’ access to effective teachers, how to improve this condition is certainly of great debate, a debate neither explicitly nor implicitly acknowledged throughout this piece. Rather, much disagreement and debate, in fact, still exist regarding whether inducing teacher turnover will get “us” really anywhere in terms of school reform, as also related to how big (or small) teachers’ effects on students’ measurable performance actually are, as discussed prior. Accordingly, and perhaps not surprisingly, Adnot et al. (2016) cite only the articles of other researchers, or rather members of their proverbial choir (e.g., Eric Hanushek who, without actual evidence, has been hypothesizing for nearly a decade now about how replacing “ineffective” teachers with “effective” teachers will reform America’s schools), to support these same a priori conclusions. Consequently, the professional integrity of the researchers must be put into check given these simple (albeit biased) errors.

Taking all of this into consideration, I would hardly call the findings advanced in this piece (and emphasized in the abstract) solid indicators of the “overall positive effects of teacher turnover,” with only one statistically but not practically significant finding of note in mathematics (i.e., a 2.7% increase, if accurate). None of this could or should, accordingly, lead anyone to conclude that “the supply of entering teachers appears to be of sufficient quality to sustain a relatively high turnover rate.”

Hence, this is yet another case of these authors oversimplifying very complex data to oversimplify a very complex context and policy situation, after which they exaggerated negligible findings while also dismissing others.

Related, would we not expect greater results, given that teachers deemed highly effective can receive one-time bonuses of up to $25,000 and permanent increases to their base salaries of up to $27,000 per year? This bang, or lack thereof, may not be worth the buck, either.

Additionally, is an annual attrition rate of “low-performing teachers” (e.g., those classified as such for one or two consecutive years) in the district, currently hovering at around 46%, worth these diminutive results?

Did they also actually find, overall, that “high-poverty schools actually improve as a result of teacher turnover”? I don’t think so, but do give this study a full read to test their conclusions, as well as mine, for yourself (see, again, the full study here).

In the end, Adnot et al. (2016) do conclude that they found “that the overall effect of teacher turnover in DCPS conservatively had no effect on achievement and, under reasonable assumptions, improved achievement.” This is a MUCH more balanced interpretation of this study, although I would certainly question their “reasonable assumptions” (see also prior). Moreover, it is curious why we had to wait until the end for the actual headline of this study. This is especially important given that others, including members of the media, the public, and the policymaking community, might not make it that far (i.e., trusting only what is in the abstract).


Adnot, M., Dee, T., Katz, V., & Wyckoff, J. (2016). Teacher turnover, teacher quality, and student achievement in DCPS [Washington DC Public Schools]. Cambridge, MA: National Bureau of Economic Research (NBER). Retrieved from

Dee, T., & Wyckoff, J. (2013). Incentives, selection, and teacher performance: Evidence from IMPACT. National Bureau of Economic Research (NBER). Retrieved from

The ACT Testing Corporation (Unsurprisingly) Against America’s Opt-Out Movement

The Research and Evaluation section/division of the ACT testing corporation — ACT, Inc., the nonprofit also famously known for developing the college-entrance ACT test — recently released a policy issue brief titled “Opt-Outs: What Is Lost When Students Do Not Test.” What an interesting read, especially given ACT’s position and perspective as a testing company that is also likely being impacted by America’s opt-out-of-testing movement. Should it not be a rule that people writing on policy issues disclose all potential conflicts of interest? They did not here…

Regardless, last year throughout the state of New York, approximately 20% of students opted out of statewide testing. In the state of Washington more than 25% of students opted out. Large and significant numbers of students also opted out in Colorado, Florida, Oregon, Maine, Michigan, New Jersey, and New Mexico. Students are opting out, primarily because of community, parent, and student concerns about the types of tests being administered, the length and number of the tests administered, the time that testing and testing preparation takes away from classroom instruction, and the like.

Because many states also rely on ACT tests for statewide, not just college entrance exam purposes, clearly this is of concern to ACT, Inc. But rather than the corporation rightfully positioning itself on this matter as a company with clear vested interests, ACT Issue Brief author Michelle Croft frames the piece as a genuine plea to help others understand why they should reject the opt-out movement, not opt out their own children, generally help to curb and reduce the nation’s opt-out movement, and the like, given the movement’s purportedly negative effects.

Here are some of the reasons ACT’s Croft gives in support of not opting out, along with my research-informed commentaries per reason:

  • Scores on annual statewide achievement tests can provide parents, students, educators, and policymakers with valuable information—but only if students participate. What Croft does not note here is that such large-scale standardized test scores, without taking into account growth over time (an argument that actually works in favor of VAMs), are so highly correlated with student test-takers’ demographics that they often do not tell us much that we would not have known from student demographics alone. This is a very true, and also very unfortunate, reality, whereby with a small set of student demographics we can actually predict with great (albeit imperfect) certainty students’ test scores without students taking the tests. In other words, if 100% of students opted out, we could still use some of even our most rudimentary statistical techniques to determine what students’ scores would have been regardless; hence, this claim is false.
  • Statewide test scores are one of the most readily available forms of data used by educators to help inform instruction. This is also patently false. Teachers, on average and as per the research, do not use the “[i]ndividual student data [derived via these tests] to identify general strengths and weaknesses, [or to] identify students who may need additional support” for many reasons, including the fact that test scores often come back to teachers after their tested students have moved on to the next grade level. This is especially true given that tests administered not at the state but at the district, school, or classroom levels yield data that are much more instructionally useful. What Croft does not note is that many research studies, and researchers, have evidenced that the types of tests at the source of the opt-out movement are also the least instructionally useful (see a prior post on this topic here). Accordingly, Croft’s claim here also contradicts recent research written by some of the luminaries in the field of educational measurement, who collectively support the design of more instructionally useful and sensitive tests in general, to combat perpetual claims like these surrounding large-scale standardized tests (see here).
  • Statewide test scores allow parents and educators to see how students measure up to statewide academic standards intended for all students in the state…[by providing] information about a student’s, school’s, or district’s standing compared to others in the state (or across states, if the assessment is used by more than one). See my first argument about student-level demographics, as the same holds true here. Whether these tests are better indicators of what students learned or of students’ demographics is certainly of debate, and unfortunately most of the research evidence supports the latter (unless, perhaps, VAMs or growth models are used to measure large-scale growth over time).
  • Another benefit…is that the data gives parents an indicator of school quality that can help in selecting a school for their children. See my prior argument, again, especially in that test scores are also highly correlated with property/house values; hence, with great certainty one can pick a school just by picking a home one can afford or a neighborhood in which one would like to live, regardless of test scores, as the test scores of the surrounding schools will ultimately reveal themselves to match said property/house values.
  • While grades are important, they [are not as objective as large-scale test scores because they] can also be influenced by a variety of factors unrelated to student achievement, such as grade inflation, noncognitive factors separate from achievement (such as attendance and timely completion of assignments), unintentional bias, or unawareness of performance expectations in subsequent grades (e.g., what it means to be prepared for college). Large-scale standardized tests, of course, are not subject to such biases and unrelated influences, we are to assume and accept as an objective truth.
  • Opt-outs threaten the overall accuracy—and therefore the usefulness—of the data provided. Indeed, this is true, and it is also one of the arguably positive side effects of the opt-out movement: without large enough samples of students participating in such tests, the extent to which test companies and others can draw generalizable conclusions about, in this case, larger student populations is statistically limited. Given the fact that we have been relying on large-scale standardized tests to reform America’s education system for over 30 years now, yet we continue to face an “educational crisis” across America’s public schools, perhaps test-based reform policies are not the solution that testing companies like ACT, Inc. continue to argue they are. While perpetuating this argument in favor of reform is financially wise and lucrative, all at the taxpayer’s expense, little to no research exists to support that using such large-scale test-based information helps to reform or improve much of anything.
  • Student assessment data allows for rigorous examination of programs and policies to ensure that resources are allocated towards what works. The one thing large-scale standardized tests do help us do, especially as researchers and program evaluators, is examine and assess large-scale programs’ and other reform efforts’ impacts. Whether students should have to take tests for just this purpose, however, may also not be worth the nation’s and states’ financial and human resources and investments. With this, most scholars also agree, but more so now when VAMs are used for such large-scale research and evaluation purposes. VAMs are, indeed, a step in the right direction when we are talking about large-scale research.
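The statistical point raised in the list above about opt-outs and accuracy can be made concrete: under simple random sampling, the margin of error of a school-level average grows as fewer students test. A hedged sketch with hypothetical numbers of my own choosing (and note that real opt-out behavior is non-random, which biases estimates beyond merely widening them):

```python
import math

def margin_of_error(score_sd: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a mean score under
    simple random sampling (an idealizing assumption)."""
    return z * score_sd / math.sqrt(n)

# Hypothetical school: 500 test-eligible students, score SD of 15 points
print(round(margin_of_error(15, 500), 2))  # all students test:           ~1.31
print(round(margin_of_error(15, 375), 2))  # 25% opt out, as in WA state: ~1.52
```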

Author Croft, on behalf of ACT, then makes a series of recommendations to states regarding such large scale testing, again, to help curb the opt out movement. Here are their four recommendations, again, alongside my research-informed commentaries per recommendation:

  • Districts should reduce unnecessary testing. Interesting, here, is that states are not listed as an additional entity that should reduce unnecessary testing. See my prior comments, especially the one regarding the most instructionally useful tests being those at the classroom, school, and/or district levels.
  • Educators and policymakers should improve communication with parents about the value gained from having all students take the assessments. Unfortunately, I would not start with the list provided in this piece. Perhaps this blog post will, however, help present a fairer interpretation of their recommendations and the research-based truths surrounding them.
  • Policymakers should discourage opting out…States that allow opt-outs should avoid creating laws, policies, or communications that suggest an endorsement of the practice. Such a recommendation is remiss, in my opinion, given the vested interests of the company making this recommendation.
  • Policymakers should support appropriate uses of test scores. I think we can all agree with this one, although large-scale test scores should not be used and promoted for accountability purposes, as also suggested herein, given that the research does not support that doing this actually works either. For a great, recent post on this, click here.

In the end, all of these recommendations, as well as the reasons the opt-out movement should be thwarted, are coming via an Issue Brief authored and sponsored by a large-scale testing company. This fact, in and of itself, puts everything positioned as a set of disinterested recommendations and reasons into question. This is unfortunate for ACT, Inc., given its role as the author and sponsor of this piece.

More Bullying in New Mexico, Now of District School Boards

Following up on a recent post about “New Mexico UnEnchanted” and a follow-up post about how the state’s Public Education Department (PED) is also “Silencing [its] Educators” by requiring them to sign contractual documents indicating they will not “diminish the significance or importance of the tests” in the state, it now seems the PED is also attempting to usurp the power and authority of the state’s local school boards. More specifically, the PED is actively seizing power and authority over local school districts’ teacher evaluation systems, and in this case over the extent to which sick leave is to be used to hold teachers accountable for their effectiveness.

In New Mexico, courtesy of the PED, all school districts are to include teachers’ absences due to sick leave and personal leave when holding teachers accountable for their effectiveness every year. While teachers’ collective bargaining agreements stipulate that such teacher absences should not be part of teachers’ evaluations, the state has overruled such stipulations. “If a teacher misse[s] a week of school during the year because of serious illness or surgery, or because of their child’s illness, they [are to] automatically receive a low evaluation rating, according to PED rules.”

The main issue here occurred when the school board of the Las Cruces District – the state’s second largest district – passed a resolution that contradicted the state’s above-mentioned plan as pertinent to the teacher attendance component. The PED chief Hanna Skandera, thereafter, threatened a takeover of the school district by the state, after which the school board rescinded its resolution.

In an article released last week written by “New Mexico Senate Democrats [about] PED Bullying [in] Las Cruces Public Schools,” Democratic senators argue that the PED is “overstepp[ing] its legal authority” in a state in which such “bullying tactics” have no place in educational policy. “This is unwarranted intrusion and interference in the matters of an elected local school board, and it is wrong.”

According to Senate Majority Leader Michael S. Sanchez, “PED is tearing up legal contracts to force teachers not to use sick leave they have bargained for successfully. PED’s action is big government telling elected local officials what to do. Ordinary citizens of New Mexico, and especially all those who regularly decry the intrusion of big government into our lives, should be very upset about what the Governor’s Public Education Department [PED] has done.” He added, “During the recent legislative session we passed, and Governor Martinez signed, a strong anti-bullying bill to clamp down on bullying in the school yard and online. I think next year we may need a bill to stop the Governor and state education bureaucrats from bullying local school boards.”

Even the pretty conservative editor Walt Rubel of the Las Cruces Sun News agrees that the “PED [is using extreme] Strong-Arm [aka bullying] Tactics Against [the] Local School Board.”

As written into this editorial, as per PED chief Hanna Skandera, “the powers and duties of the local board could be suspended by the state if the district ‘[fails] to meet requirements of law or department rules or standards.’” Skandera added, “While this is surely an extreme and undesirable outcome, it may be a potential consequence should the Las Cruces School Board continue to act outside its authority and direct the [Las Cruces] superintendent to violate the law.” As per a PED spokesperson, in response to the school district backing down, “We are pleased with the outcome, recognizing the importance of working together toward a solution.”

“Right!” – writes Editor Rubel. “The PED and the school board ‘worked together’ in the same way that a lion and an antelope work together to ensure that the lion remains well fed.”

“The problem for PED chief Hanna Skandera is that she has been completely unable to achieve buy-in from teachers, students and parents throughout the state for the reforms being imposed from Santa Fe by Gov. Susana Martinez. That was evident last month when hundreds of students in Las Cruces, and thousands throughout the state, walked out in protest of the first year of the new PARCC testing. They haven’t been able to win support for their reforms on the merits, so they have had to implement them by force and intimidation” and other similar bullying tactics.

Calling all Teachers with Something to Say about NCLB’s Overhaul!

The week of April 13th, the Senate Education Committee will begin to mark up legislation to overhaul No Child Left Behind (NCLB) for the first time in more than a decade. This controversial law, signed by President Bush in 2001, was intended to make sure schools were doing a good job educating children; success was measured by test scores, and funding was tied to that success. NCLB has angered parents and teachers alike and become a lightning rod in the partisan education debate. A revamping of NCLB could signal a new era in education in the United States, and The Takeaway wants to know how teachers would change this legislation to benefit their students.

In case you’re not familiar with it: The Takeaway is an award-winning daily news show produced by WNYC in partnership with The New York Times and Public Radio International.  The show airs across the country on more than 200 stations, reaching upwards of 2 million listeners nationwide.

To share your thoughts on what should or shouldn’t be included in the overhaul of No Child Left Behind please send a voice memo to alternatively you can also leave us a voicemail at 877-8-MY-TAKE.

Please include in your response your name and location, what you teach and how long you’ve been teaching.

Amber (below) is happy to answer any additional questions and can be reached at

Amber Hall | Planning Editor, The Takeaway

TIME Magazine Needs a TIME Out

In its paper version next week, TIME Magazine will release an article titled “Rotten Apples: It’s Nearly Impossible to Fire a Bad Teacher.” As the title foreshadows, the article is about how “really difficult” it is to fire a bad teacher, and how a group of Silicon Valley investors, along with Campbell Brown (the award-winning news anchor who recently joined “the cause” in New York, as discussed prior here), want to change that.


The article summarizes, or I should say celebrates, the Vergara v. California trial, the case in which nine public school students (emphasis added, as these were not necessarily these students’ ideas) challenged California’s “ironclad tenure system,” arguing that their rights to a good education had been violated by state-level job protections making it “too difficult” to fire bad teachers. Not surprisingly, behind the students stood one of approximately six Silicon Valley technology magnates, David Welch, who financed and ultimately won the case (see prior posts about this case here and here). Not surprisingly, again, the author of this article also comes from Silicon Valley. I wonder if they all know each other…

Anyhow, as summarized (and celebrated) in this piece, the Vergara judge found that (1) “[t]enure and other job protections make it harder to fire teachers and therefore effectively work to keep bad ones in the classroom,” and (2) “[b]ad teachers ‘substantially undermine’ a child’s education.” This last point, which according to the judge “shock[ed] the conscience,” came almost entirely thanks to the testimony of Thomas Kane (see prior posts about his Bill and Melinda Gates-funded research here and here) and of Raj Chetty, Kane’s colleague at Harvard who has also been the source of many prior posts (see here and here), but whose research underlying his testimony has recently been (seriously) scrutinized, given how bias in his study actually rendered this key assertion false.

Research recently released by Jesse Rothstein evidences that the claims Chetty made during Vergara were actually false, because Chetty (and his study colleagues) improperly masked the bias underlying their main causal assertion, again, that “bad teachers ‘substantially undermine’ a child’s education.” It was not bad teachers at the root cause of student failure, as Chetty (and Kane) argued (and continue to argue); rather, many other influences beyond “bad teachers” account for student success (or failure) (click here to read more).

Nevertheless, the case (and this article) were built on this false premise. As per one of the plaintiffs’ attorneys, “The fact that [they, as in Chetty and Kane] could show how students were actually harmed by bad teachers–that changed the argument.”

The article proceeds to describe how, “predictably,” many teacher unions and the like dismissed pretty much everything associated with the lawsuit, except of course the key assertions advanced by the judge. The only other luminaries of note included “U.S. Secretary of Education Arne Duncan and former D.C. chancellor of schools Michelle Rhee,” who both praised the judge’s decision for challenging the “broken status quo.” The aforementioned, and self-proclaimed “education reformer” Campbell Brown, heralded it as “the most important civil rights suit in decades,” after which she helped to file two similar cases in New York (as discussed, again, here).

As the author of this TIME piece then asserts, this is a war “not [to] be won incrementally, through painstaking compromise with multiple stakeholders, but through sweeping decisions–judicial and otherwise–made possible by the tactical [emphasis added] application of vast personal fortunes,” thanks to the Silicon Valley et al. magnates. “It is a reflection of our politics that no one elected these men [emphasis added] to take on the knotty problem of fixing our public schools, but here they are anyway, fighting for what they firmly believe is in the public interest.” This is, indeed, a “war on teacher tenure” that, funded by this “latest batch of tech tycoons…follows in the footsteps of a long line of older magnates, from the Carnegies and Rockefellers to Walmart’s Waltons, who have also funneled their fortunes into education-reform projects built on private-sector management strategies.”

The article then goes into a biography of Welch (the aforementioned, “bushy eyebrowed” backer of the Vergara case) as if he were Christopher Columbus incarnate. Welch “didn’t think much about how the system actually functioned, or malfunctioned, until his own children were born in the ’90s and went on to have ‘some public experiences and some private-school experiences.’” Another education expert such an experience makes. He also had a conversation with one superintendent, who wanted more control over his workforce and who inspired Welch, thereafter, to lead the war on teacher tenure, because “children [were presumably] being harmed by these laws.” As the article summarizes the case’s core argument:

“[S]tudents who are stuck in classrooms with bad teachers receive an education that is substantially inferior to that of students who are in classrooms with good teachers. Laws that keep bad teachers in the classroom…therefore violate the equal-protection clause of the state constitution….[and] poor and minority students, who are more likely to be in classrooms with bad teachers, endure a disproportionate burden, making the issue a matter of civil rights as well.”

To see two other articles written about this TIME piece, please click here and here. As per Diane Ravitch in the latter link: “More: tenure is due process, the right to a hearing, not a guarantee of a lifetime job. Are there bad apples in teaching? Undoubtedly, just as there are bad apples in medicine, the law, business, and even TIME magazine. There are also bad apples in states where teachers have no tenure. Will abolishing tenure increase the supply of great teachers? Surely we should look to those states where teachers do not have tenure to see how that worked out. Sadly, there is no evidence for the hope, wish, belief, that eliminating due process produces a surge of great teachers.”

“Ineffective,” Veteran, Primary Grade Teacher in Tennessee Resigns

As per a recent article in The Tennessean, it seems yet another teacher has resigned, this time from the 1st grade – a grade in which teacher-level value-added normally does not “count.” This teacher, a 15-year veteran of the 1st grade, was recently categorized as “ineffective” in terms of “adding value” to her students’ learning and achievement, after her district added a new test to start holding primary grade teachers accountable for their value-added as well. To read her full letter of resignation and the conditions driving her decision, click here.

Thirty-five percent of her evaluation score was based on student growth or value-added as determined by the Tennessee Value-Added Assessment System (TVAAS), often called outside the state of Tennessee (where it was originally developed) the Education Value-Added Assessment System (EVAAS). Both of these systems should be of increasing familiarity to readers/followers of this blog.

But given a different test recently introduced to help evaluate more teachers like her, again in the primary grades for which no state-level tests otherwise exist (unlike in grades 3-8), just this year she “received a growth score of 1, [after which she] was placed on a list of ineffective teachers needing additional coaching.” Ironically, the person assigned to serve as her mentor, to help her become better than an “ineffective teacher,” was her own student teacher from a few years prior. It seems her new mentor was not able to increase her former mentor’s effectiveness in due time, however.

But here’s the real issue: In this case, and in the exponentially growing number of cases like it across the country, the district decided to use a national rather than a state test (i.e., the SAT 10), a test that can (but should not) be used to test students in kindergarten and 1st grade, and then, more importantly, to attribute students’ growth on these tests over time to their teachers, again, simply to include more teachers in these evaluation systems.

In just this case, the test’s data were run through the TVAAS system – a system that research elsewhere has evidenced labels teachers effective or ineffective despite contradictory data, sometimes 30% to 50% of the time. In all fairness, other systems do not seem to be faring much better. Regardless, adding a test that is completely different from the tests being (erroneously) used elsewhere for other teachers, and then assuming that the results should just work, is foolish, to put it lightly, albeit unfortunately faddish.

Forcing the Fit Using Alternative “Student Growth” Measures

As discussed on this blog prior, when we are talking about teacher effectiveness as defined by the output derived via VAMs, we are talking about the VAMs that still, to date, only impact 30%-40% of all America’s public school teachers. These are the teachers who typically teach mathematics and/or reading/language arts in grades 3-8.

The teachers who are not VAM-eligible are those who typically teach in the primary grades (i.e., grades K-2), teachers in high schools who teach more specialized subject areas that are often not tested using large-scale tests (e.g., geometry, calculus), and teachers who teach outside the subject areas typically tested (e.g., social studies, science [although there is a current push to increase testing in science], physical education, art, music, special education, etc.). Sometimes entire campuses of teachers are not VAM-eligible.

So, what are districts to do when they are to follow the letter of the law, and the accountability policies being financially incentivized by the feds, and then the states (e.g., via Race to the Top and the NCLB waivers)? A new report released by the Institute of Education Sciences (IES), the research arm of the US Department of Education, and produced by Mathematica Inc. (via a contract with the IES) explains what states are up to in order to comply. You can find the summary and full report titled “Alternative student growth measures for teacher evaluation: Profiles of early-adopting districts” here.

What investigators found is that these “early adopters” are using end-of-course exams, commercially available tests (e.g., the Galileo assessment system), and Student Learning Objectives (SLOs), which are teacher-developed and administrator-approved, to hold teachers accountable for their students’ growth. This, although an SLO is about as subjective a measure as it gets, at least in the company of the seemingly objective, more rigorous, and vastly superior VAMs. In addition, the districts sampled are adopting the same VAM methodologies to keep all analytical approaches (except for the SLOs) the same, almost regardless of the measures used. The thinking seems to be that if the measures exist, or are to be adopted, districts might as well “take advantage of them” to evaluate value-added, because the assessments can be used (and exploited) to measure the value-added of more and more teachers. What?

This is the classic case of what we call “junk science.” We cannot just take whatever tests, regardless of to what standards they are aligned, or not, and run the data through the same value-added calculator in the name of accountability consistency.

Research already tells us that using different tests, even with the same students of the same teachers at the same time, and even when running them through the same VAMs, yields very, very different results (see, for example, the Papay article here).
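This instability is easy to demonstrate with a toy simulation, a minimal sketch only: the teacher counts, noise levels, normal distributions, and bottom-quintile “ineffective” cutoff below are all illustrative assumptions of mine, not parameters drawn from the Papay study or from TVAAS. The point is simply that two equally noisy tests of the very same teachers, classified the very same way, flag noticeably different sets of “ineffective” teachers.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

N_TEACHERS = 100
STUDENTS_PER_TEACHER = 25

# True (unknowable) teacher effects, identical for both tests.
teacher_effect = [random.gauss(0, 1) for _ in range(N_TEACHERS)]

def simulate_test(noise_sd):
    """Return each teacher's mean student gain on one test:
    the true teacher effect plus independent student/test noise."""
    means = []
    for t in range(N_TEACHERS):
        gains = [teacher_effect[t] + random.gauss(0, noise_sd)
                 for _ in range(STUDENTS_PER_TEACHER)]
        means.append(statistics.mean(gains))
    return means

test_a = simulate_test(noise_sd=3.0)  # stand-in for, say, a state test
test_b = simulate_test(noise_sd=3.0)  # stand-in for a different national test

def bottom_quintile(scores, frac=0.2):
    """Indices of the lowest-scoring fraction of teachers ('ineffective')."""
    k = int(len(scores) * frac)
    return set(sorted(range(len(scores)), key=lambda t: scores[t])[:k])

flagged_a = bottom_quintile(test_a)
flagged_b = bottom_quintile(test_b)

# How often do the two tests agree on who is "ineffective"?
overlap = len(flagged_a & flagged_b) / len(flagged_a)
print(f"Share of 'ineffective' teachers flagged by both tests: {overlap:.0%}")
```

Even though both simulated tests measure the exact same underlying teacher effects, ordinary measurement noise alone is enough to make the two “ineffective” lists diverge, which is the crux of the problem with swapping in a new test and expecting consistent teacher classifications.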

Do the feds not see that forcing states to force the fit is completely wrong-headed and simply wrong? They are the ones who funded this study, but apparently see nothing wrong with the absurdity of the study’s results. Rather, they suggest, results should be used to “provide key pieces of information about the [sampled] districts’ experiences” so that results “can be used by other states and districts to decide whether and how to implement alternative assessment-based value-added models or SLOs.”

Force the fit, they say, regardless of the research or really any inkling of common sense. Perhaps this will help to further line the pockets of more corporate reformers eager to offer not only their VAM services but also even more tests, end-of-course exams, and SLO systems.

Way to lead the nation!

What is “Value-Added” in Agriculture?

An interesting post came through my email defining “value-added” in its purest form. This comes from the field of agriculture where value-added is often used to model genetic and reproductive trends among livestock, and from where it was taken and applied to the “field” of education in the 1980s.

Here’s the definition: “Value-Added is the process of taking a raw commodity and changing its form to produce a high quality end product. Value-Added is defined as the addition of time, place, and/or form utility to a commodity in order to meet the tastes/preferences of consumers. In other words, value-added is figuring out what consumers want, when they want it, and where they want it – then mak[ing] it and provid[ing] it to them.”

In education, the simplest of translations follows: “Value-Added is the process of taking learning (i.e., a raw material) and changing its form (i.e., via teaching and instruction) to produce a high quality end product (i.e., high test scores). Value-Added is defined as the addition of ‘value’ in terms of changing learning’s most observable characteristics (i.e., test scores) in order to meet the (highly politicized) tastes/preferences of consumers. In other words, value-added is figuring out what consumers want, when they want it, and where they want it – then mak[ing] it and provid[ing] it to them.”

If only it were as simple as that. Most unfortunate is that most policymakers, being non-educators/educationists but self-identified education experts, cannot get past this overly simplified definition, translation, and shallow level of depth.